ICLR
Title
Class Prototype-based Cleaner for Label Noise Learning
Abstract
Semi-supervised learning based methods are the current SOTA solutions to the noisy-label learning problem. They rely on learning an unsupervised label cleaner first to divide the training samples into a labeled set for clean data and an unlabeled set for noise data. Typically, the cleaner is obtained by fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is class-agnostic and assumes the loss distributions of clean and noise samples are the same across different classes. Unfortunately, in practice, such an assumption does not always hold due to the varying learning difficulty of different classes, thus leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). Unlike previous works treating all the classes equally, CPC fully considers loss distribution heterogeneity and applies class-aware modulation to partition the clean and noise data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously and thus can better distinguish clean and noise labels. We theoretically justify the effectiveness of our method by explaining it from the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings about performance improvement across all benchmarks.
1 INTRODUCTION
Deep Neural Networks (DNNs) have brought about significant progress to the computer vision community over the past few years. One key to their success is the availability of large amounts of training data with proper annotations. However, label noise is very common in real-world applications. Without proper intervention, DNNs would be easily misled by the label noise and yield poor performance.
In order to improve the performance of DNNs when learning with noisy labels, various methods have been developed (Liu et al., 2020; Li et al., 2020a; Reed et al., 2014; Nishi et al., 2021). Among them, semi-supervised learning based methods (Nishi et al., 2021; Li et al., 2020a) achieve the most competitive results. These methods follow a two-stage pipeline. They first model the loss distribution of training samples to construct a noise cleaner based on the “small-loss prior” (Han et al., 2020), which says that in the early stage of training, samples with smaller cross-entropy losses are more likely to have clean labels. The prior is widely adopted and demonstrated to be highly effective in practice (Han et al., 2020). Given the noise cleaner, the training samples are divided into a labeled clean set and an unlabeled noise set. Then, semi-supervised learning strategies like MixMatch (Berthelot et al., 2019) are employed to train DNNs on the divided dataset.
The key to their performance lies in the accuracy of the label-noise cleaner (Cordeiro et al., 2022). Usually, a single Gaussian Mixture Model (GMM) (Li et al., 2020a) is used to model the loss distribution of all the training samples across different categories. However, this modeling procedure is class-agnostic: it assumes a DNN model fits the training samples of different categories at the same speed, so that the same loss value reflects the same degree of noise likelihood regardless of category.
Unfortunately, such an assumption does not hold in practice. In Fig. 1, we present the cross-entropy loss distribution of training samples at the end of the DNNs' warm-up period. We conduct a Kolmogorov-Smirnov test (Massey Jr, 1951) to quantify the difference between the loss distribution of samples in each class and that of samples in the whole dataset. The results show that for 54% of the categories in CIFAR-100 under 90% symmetric noise, the p-value is lower than 0.05¹ for the hypothesis test that the probability distribution of clean samples in the class is the same as the probability distribution of clean samples in the whole dataset; the corresponding number for noise samples is 53%. Therefore, the class-agnostic label noise cleaner, which establishes an overly rigid criterion shared by all the classes, would introduce more noise samples into the clean set while rejecting clean samples, and consequently make the model perform poorly. A straightforward remedy to the problem is to fit a distinct GMM to the losses of samples in each class, yielding a class-aware GMM cleaner. Nevertheless, this class-aware modeling strategy implicitly assumes that label noise exists in every class. In the case of asymmetric noise, e.g., CIFAR10-asym40%, where samples in some classes are entirely clean, such a naive strategy would classify most hard samples in the clean classes as noise, with a negative effect on model training.
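The per-class test above can be sketched in a few lines. The snippet below is illustrative rather than the authors' implementation; `losses` and `labels` are hypothetical NumPy arrays holding per-sample warm-up cross-entropy losses and annotated labels.

```python
import numpy as np
from scipy.stats import ks_2samp

def fraction_of_divergent_classes(losses, labels, num_classes, alpha=0.05):
    """Fraction of classes whose loss distribution differs from the global one."""
    rejected = 0
    for k in range(num_classes):
        class_losses = losses[labels == k]
        # Null hypothesis: class-k losses and global losses come from the
        # same distribution; reject when the two-sample KS p-value < alpha.
        _, p_value = ks_2samp(class_losses, losses)
        if p_value < alpha:
            rejected += 1
    return rejected / num_classes
```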
Considering that images in the same category should share similar visual representations, the similarity between a sample and the cluster center (e.g., class prototype) of its labeled class is helpful for recognizing label noise. In this paper, we propose a simple Class Prototype-based label noise Cleaner (CPC) to apply class-aware modulation to the partitioning of clean and noise data, which takes advantage of intra-class consistency regularization in feature space and loss distribution modeling simultaneously. CPC learns an embedding for each class, i.e., a class prototype, via intra-class consistency regularization, which urges samples in the same class to gather around the corresponding class prototype while pushing samples not belonging to the class away. Unlike the aforementioned naive class-aware GMM cleaner, CPC applies class-aware modulation to label noise partitioning via representation similarity measuring without assuming that label noise exists in every class, which is more general across different label noise scenarios. Meanwhile, CPC leverages the “small-loss prior” to provide stronger and more robust supervision signals that facilitate the learning of prototypes.
We plug CPC into the popular DivideMix (Li et al., 2020a) framework, which iterates between label noise partitioning and DNN optimization. With the stronger label noise cleaner in the first stage, DNNs can be trained better in the second stage, which in turn further improves the learning of class prototypes. We theoretically justify the procedure from the Expectation-Maximization algorithm perspective, which guarantees the efficacy of the method. We conduct extensive experiments on multiple noisy-label benchmarks, including CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results clearly show that CPC effectively improves the accuracy of label-noise partitioning, and brings about consistent performance improvement across all noise levels and benchmarks.
The contributions of our work are threefold: (1) We reveal the long-ignored problem of class-agnostic loss distribution modeling that is widespread in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC); (2) CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which can better distinguish clean and noise labels; (3) Extensive experimental results show that our method achieves competitive performance compared to current SOTAs.

¹ A p-value < 0.05 suggests that the probability that the class-wise loss distribution is the same as the global loss distribution is lower than 5%.
2 RELATED WORK
Recent advances in robust learning with noisy labels can be roughly divided into three groups. (a) Label correction methods aim to translate wrong labels into correct ones. Early studies rely on an auxiliary set with clean samples for clean label inference (Xiao et al., 2015a; Vahdat, 2017; Li et al., 2017b; Lee et al., 2018). Recent efforts focus on performing label correction procedures without supervision regarding clean or noise labels. (Yi & Wu, 2019a; Tanaka et al., 2018) propose to jointly optimize labels while learning model parameters. Li et al. (2020b) propose to correct corrupted labels via learning class prototypes and utilize the pseudo-labels generated by measuring the similarity between prototypes and samples to train the model. Wu et al. (2021) and Li et al. (2021) introduce neighbouring information in feature space to correct noise labels, and propose a graph-based method and a class prototype-based method, respectively. (b) Sample selection methods select potentially clean samples for training to eliminate the effect of noise labels on learning the true data distribution. (Han et al., 2018; Jiang et al., 2018; 2020; Yu et al., 2019) involve training two DNNs simultaneously and focus on the samples that are likely to be correctly labeled. (c) Semi-supervised learning methods conceal noise labels and treat these samples as unlabeled data (Ding et al., 2018). DivideMix (Li et al., 2020a) is a typical algorithm among these works, which comprises an unsupervised label noise cleaner that divides the training data into a labeled clean set and an unlabeled noise set, followed by semi-supervised learning that minimizes the empirical vicinal risk of the model. Inspired by DivideMix, a series of methods (Cordeiro et al., 2022; Nishi et al., 2021; Cordeiro et al., 2021) have been proposed, which achieve SOTA performance. However, all these methods rely on class-agnostic loss distribution modeling to obtain the label noise cleaner, which hinders the performance of the model. The class-agnostic loss distribution modeling implicitly assumes a DNN model has the same speed in memorizing training samples of different categories. However, in reality, the memorization speeds differ, which causes the problem of under-learning in hard classes as revealed by Wang et al. (2019). In this paper, we focus on another consequence of this issue, i.e., the class-agnostic loss distribution modeling problem in the context of label noise cleaners, and propose the simple yet effective class prototype-based label noise cleaner to solve it. Besides, compared to previous prototype-based label noise learning methods (Li et al., 2020b; 2021), our method differs in two respects: (1) we utilize prototypes as a label noise cleaner to effectively improve semi-supervised learning based methods; (2) CPC takes advantage of both loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which learns better prototypes.
3 PRELIMINARY
In label noise learning, given a training set D = (X, Y) = {(x_i, y_i)}_{i=1}^N, where x_i is an image and y_i ∈ {1, 2, ..., K} is the annotated label over K classes, the label y_i could differ from the unknown true label ŷ_i. In this paper, we follow the popular label noise learning framework DivideMix (Li et al., 2020a), which first warms up the model for a few epochs by training on all the data with the standard cross-entropy loss, and then trains the model by iterating a two-stage pipeline. The pipeline comprises an unsupervised label cleaner Q that divides training samples into a labeled set for clean data X and an unlabeled set for noise data U, followed by a semi-supervised learning stage that trains the model to minimise the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big) + \frac{\lambda}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big), \qquad (1)$$
where X′ and U′ denote the MixMatch (Berthelot et al., 2019) augmented clean and noise sets. ℓ_X′ and ℓ_U′ denote the losses for samples in X′ and U′, which are weighted by λ. p(ỹ′_i|x′_i) is the softmax output of the DNNs, where ỹ′_i is the predicted label. For more details about the EVR, please refer to appendix A.1.
In Li et al. (2020a), the unsupervised label cleaner operates under the “small-loss prior”, which is widely adopted and demonstrated to be highly effective (Han et al., 2020). The prior assumes that in the early stage of training, samples with smaller cross-entropy losses are more likely to have clean labels. The well-known insight behind the “small-loss prior” is that DNNs tend to learn simple patterns first before fitting label noise (Arpit et al., 2017). Given a training sample x_i and the softmax output p(ỹ_i|x_i) of the DNNs, where ỹ_i is the predicted label, the cross-entropy loss ℓ(p(ỹ_i|x_i), y_i) reflects how well the model fits the training sample.
To obtain the unsupervised label cleaner Q, a two-component Gaussian Mixture Model (GMM) is fitted to the loss distribution of all training samples, i.e., ℓ(p(ỹ_i|x_i), y_i) ∼ ϕ_0 N(µ_0, σ_0) + ϕ_1 N(µ_1, σ_1), where µ_0 < µ_1 and ϕ is a mixing coefficient. The component with the smaller mean represents the distribution of clean samples and the other one represents noise samples. We use z_i ∈ {0, 1} to indicate whether the data is clean or not. Then, q(z_i = 0) represents the clean probability of x_i, which is the posterior probability of its loss belonging to the clean component. The label cleaner is shared by training samples across different classes, i.e., it is class-agnostic. A hypothesis implicitly accompanying this loss distribution modeling method has been ignored by current works: it assumes the loss distributions of clean and noise samples are consistent across different categories. Unfortunately, as illustrated in Fig. 1, the hypothesis does not hold in practice. In this paper, we propose the class prototype-based label noise cleaner, which applies class-aware modulation to the partitioning of clean and noise data and improves label noise learning.
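For concreteness, a minimal sketch of such a class-agnostic GMM cleaner is given below, assuming `losses` is an [N, 1] array of per-sample cross-entropy losses; the GMM hyper-parameters and the threshold `tau` are illustrative choices, not necessarily those of the original implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_clean_probability(losses, tau=0.5):
    """Fit a 2-component GMM to per-sample losses and return q(z_i = 0)."""
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    # The component with the smaller mean models the clean samples.
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    q_clean = gmm.predict_proba(losses)[:, clean_comp]
    return q_clean, q_clean > tau  # clean probability and clean/noise partition
```

A class-aware variant would simply call this function once per class, on the losses of that class's samples.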
4 METHODOLOGY
4.1 OVERVIEW
Our method follows the two-stage label noise learning framework DivideMix (Li et al., 2020a) and improves it with the proposed CPC. CPC comprises class prototypes C = {c_k ∈ R^{1×d} | k = 1, 2, ..., K}, where c_k denotes the prototype of the k-th class and d is the dimension of the prototype embedding. Our DNN model consists of a CNN backbone, a classifier head and a projection layer. The backbone maps an image input x_i to a feature vector v_i ∈ R^{1×D}. The classifier takes v_i as input and outputs the class prediction p(ỹ_i|x_i). The projection layer serves to project the high-dimensional feature v_i to a low-dimensional embedding v′_i ∈ R^{1×d}, where d < D. As shown in Fig. 2, we update the DNN as well as the CPC by iterating a two-stage training pipeline in every epoch. In the first stage, we update CPC as well as the projector in the DNN, and utilize the updated CPC to partition label noise. We first calculate the cross-entropy loss of every training sample and fit a GMM to the losses. We utilize the GMM as a label noise cleaner to get a labeled clean set X_GMM and an unlabeled noise set U_GMM. The data partition X_GMM and U_GMM is utilized to update the prototypes in CPC and the parameters of the projector. Note that we cut off the gradient back-propagation from the projector to the CNN backbone. Then, the updated CPC is employed to re-divide the training data into another two sets X and U. In the second stage, we train the DNN model to minimise the EVR in Eq. (1) with data partitioned by the cleaner. In the first e epochs, we let CPC warm up, and minimise the EVR of the DNNs based on training data partitioned by the GMM cleaner. After the e-th epoch, the label noise estimation results of CPC, i.e., X and U, are employed to train the DNNs, while the estimation results of the GMM cleaner are only used to update
prototypes in CPC. At inference, we utilize the DNN classifier for image recognition directly. In A.5, we further delineate the full framework; a compact sketch of the per-epoch loop is given below.
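All helper names in this sketch are hypothetical placeholders for the steps described above; Algorithm 1 in A.5 gives the full two-network version.

```python
def train_one_epoch(epoch, model, projector, prototypes, data, e_warmup):
    # Stage 1: fit the GMM cleaner, then update CPC with its partition.
    losses = per_sample_ce_losses(model, data)
    X_gmm, U_gmm = gmm_cleaner_partition(losses, data)
    # Eq. (5); gradients from the projector do not reach the backbone.
    update_prototypes(prototypes, projector, X_gmm, U_gmm)
    X_cpc, U_cpc = cpc_partition(prototypes, projector, model, data)

    # Stage 2: semi-supervised update of the DNN, minimising the EVR (Eq. (1)).
    if epoch < e_warmup:
        X, U = X_gmm, U_gmm  # let CPC warm up first
    else:
        X, U = X_cpc, U_cpc
    semi_supervised_update(model, X, U)
```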
4.2 CLASS PROTOTYPE-BASED LABEL NOISE CLEANER
In order to apply class-aware modulation to the label noise partitioning, we propose to learn an embedding space where samples from the same class are aligned with their class prototype, and leverage the prototypes to recognize noise labels. Prototypes are typically learnt with intra-class consistency regularization, which urges samples in the same class to align with the corresponding class prototype while keeping samples not belonging to the class away. Previous methods (Wang et al., 2022; Li et al., 2020b) apply intra-class consistency regularization to prototype learning via unsupervised contrastive objectives, e.g., the prototypical contrastive objective (Li et al., 2020c), where the unsupervised training labels are typically determined by the similarity between samples and prototypes. The accuracy of these training labels depends highly on the quality of the representation learnt by the CNN encoder, which can be too low to effectively update the prototypes, especially in the early stage of training. In contrast, we empirically find that the GMM cleaner, which operates under the well-evaluated “small-loss prior”, is not as sensitive as the prototypes to the representation quality, and can provide more robust and accurate training labels.
Therefore, we propose to take samples in the clean set X_GMM as positive samples and those in the noise set U_GMM as negative samples to update the prototypes. Specifically, given the feature embedding v′_i of a sample x_i from X_GMM, we update the prototypes C as well as the parameters of the projector to maximize the score q(z_i = 0) between c_{k=y_i} and v′_i, and minimize the score between c_{k≠y_i} and v′_i, via minimizing L_{X_GMM}:

$$\mathcal{L}_{\mathcal{X}_{GMM}} = -\frac{1}{|\mathcal{X}_{GMM}|}\sum_{\mathcal{X}_{GMM}}\sum_{k=1}^{K} \ell_k(v'_i, y_i), \quad \text{where}\quad \ell_k(v'_i, y_i) = \begin{cases} \log\big(\mathrm{sigmoid}(v'_i c_k^{\top})\big), & k = y_i, \\ \lambda_{neg}\log\big(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\big), & k \neq y_i, \end{cases} \qquad (2)$$
where λ_neg = 1/K weights the losses between the positive pair and the negative pairs to avoid under-fitting the positive samples. Given v′_i of a sample x_i from U_GMM, we update the prototypes c_k as well as the parameters of the projector to minimize the score q(z_i = 0) between c_{k=y_i} and v′_i via minimizing L_{U_GMM}:

$$\mathcal{L}_{\mathcal{U}_{GMM}} = -\frac{1}{|\mathcal{U}_{GMM}|}\sum_{\mathcal{U}_{GMM}} \log\big(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad \text{where } k = y_i. \qquad (3)$$
Finally, noise samples in U_GMM with high classification confidence are likely to belong to the class predicted by the DNNs, which is potentially valuable for updating the prototypes. Therefore, we collect such training samples X_P from U_GMM, taking the averaged classification confidence of samples in X_GMM as the threshold. Specifically, given a sample in U_GMM with the label predicted by the DNNs k = argmax_k p(ỹ_i|x_i), the sample is collected into X_P if p(ỹ_i|x_i)_k > average({p(ỹ_j|x_j)_k | (x_j, y_j) ∈ X_GMM, y_j = k}). Then, we update the prototypes and the projector to minimize L_{X_P}:

$$\mathcal{L}_{\mathcal{X}_P} = -\frac{1}{|\mathcal{X}_P|}\sum_{\mathcal{X}_P} \log\big(\mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad \text{where } k = \arg\max_k p(\tilde{y}_i|x_i). \qquad (4)$$
The overall empirical risk L_C for the prototypes and the projector is:

$$\mathcal{L}_C = \mathcal{L}_{\mathcal{X}_{GMM}} + \mathcal{L}_{\mathcal{U}_{GMM}} + \alpha\,\mathcal{L}_{\mathcal{X}_P}, \qquad (5)$$

where α is a weight scalar.
CPC distinguishes a clean sample (x_i, y_i) with the score q(z_i = 0) = sigmoid(v′_i c_{k=y_i}^⊤) and the threshold τ. Samples with q(z_i = 0) > τ are classified as clean, and otherwise as noise.
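A PyTorch-style sketch of the CPC objectives in Eqs. (2)-(5) and the cleaner score follows. The tensor shapes, the numerical epsilon, and the parameterization of `prototypes` as a learnable [K, d] matrix are our assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def cpc_losses(emb_clean, y_clean, emb_noise, y_noise,
               emb_pseudo, y_pseudo, prototypes, alpha=1.0, eps=1e-8):
    """emb_*: [n, d] projected embeddings; prototypes: [K, d] learnable."""
    K = prototypes.shape[0]
    lam_neg = 1.0 / K

    # Eq. (2): pull clean samples toward their labeled prototype,
    # push them away from all other prototypes.
    scores = torch.sigmoid(emb_clean @ prototypes.t())          # [n, K]
    pos = F.one_hot(y_clean, K).bool()
    loss_x = -(torch.log(scores[pos] + eps).sum()
               + lam_neg * torch.log(1 - scores[~pos] + eps).sum()
               ) / emb_clean.shape[0]

    # Eq. (3): push noise samples away from their (noisy) labeled prototype.
    s_u = torch.sigmoid((emb_noise * prototypes[y_noise]).sum(dim=1))
    loss_u = -torch.log(1 - s_u + eps).mean()

    # Eq. (4): pull confident noise samples toward their predicted prototype.
    s_p = torch.sigmoid((emb_pseudo * prototypes[y_pseudo]).sum(dim=1))
    loss_p = -torch.log(s_p + eps).mean()

    return loss_x + loss_u + alpha * loss_p                     # Eq. (5)

def cpc_clean_score(emb, y, prototypes):
    """q(z_i = 0) = sigmoid(v'_i c_{y_i}^T); compare against the threshold tau."""
    return torch.sigmoid((emb * prototypes[y]).sum(dim=1))
```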
4.3 THEORETICAL JUSTIFICATION ON THE EFFICACY OF CPC
We provide a theoretical justification of the efficacy of CPC from the perspective of the Expectation-Maximization algorithm, which guarantees that although CPC does not follow the classical prototypical contrastive objective, it can still learn meaningful prototypes and act as an effective cleaner.
We consider the training data with label noise D = (X, Y) = {(x_i, y_i)}_{i=1}^N as the observable data, and Z ∈ {0, 1}^N as the latent variable, where z_i = 0 iff (x_i, y_i) is clean (i.e., y_i = ŷ_i). The prototypes C in the cleaner are taken as the parameters to be updated. Then, the negative log-likelihood of D given C is:
$$NLL(D|C) = -\sum_{D} \log \sum_{z_i \in \{0,1\}} p(x_i, y_i, z_i|C) = -\sum_{D} \log \sum_{z_i \in \{0,1\}} q(z_i)\,\frac{p(x_i, y_i, z_i|C)}{q(z_i)}, \qquad (6)$$
where q(z_i) = p(z_i|x_i, y_i, C). According to Bayes' theorem and Jensen's inequality, we have

$$NLL(D|C) = -\sum_{D} \log \sum_{z_i \in \{0,1\}} q(z_i)\,p(x_i, y_i|C) \le -\sum_{D} \sum_{z_i \in \{0,1\}} q(z_i) \log p(x_i, y_i|C) = -\sum_{D} \sum_{z_i \in \{0,1\}} q(z_i) \log p(y_i|C, x_i) + \text{const}, \qquad (7)$$
where $-\sum_{D} \sum_{z_i \in \{0,1\}} q(z_i) \log p(y_i|C, x_i)$ is the upper bound of NLL(D|C). Typically, we can adopt the EM algorithm to find the prototypes C that minimize the upper bound by iterating:
E-step: Compute a new estimate of q(z_i) (i.e., clean or noise) according to the prototypes C_old from the last iteration:

$$q(z_i) = p(z_i|x_i, y_i, C_{old}). \qquad (8)$$

M-step: Find the prototypes C that minimize the bound:

$$C_{new} = \arg\min_{C} -\sum_{D} \sum_{z_i \in \{0,1\}} q(z_i) \log p(y_i|C, x_i). \qquad (9)$$
In our method, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the clean/noise distribution of samples, denoted q(z′_i), via the GMM cleaner instead of q(z_i) in Eq. (8). Consequently, we replace q(z_i) in Eq. (9) with q(z′_i) and find the prototypes C that minimize the bound. Next, we justify that the EM algorithm still works by proving that q(z′_i) can be considered an approximation to q(z_i) in our framework.
In our method, q(z′_i) = p(z′_i | ℓ(p(ỹ_i|x_i), y_i)), where ỹ_i ∼ p(ỹ_i|x_i, θ) is the label predicted by the DNN parameterized by θ. As introduced in section 4.1, in the first stage of each epoch, the CPC's estimation results z_i ∼ q(z_i) are utilized to divide training samples into a labeled set for clean data X = {(x_i, y_i)|z_i = 0} and an unlabeled set for noise data U = {(x_i, y_i)|z_i = 1}. Then the parameters of the DNNs, denoted θ, are optimized using Eq. (1) in the second stage. There exists an optimal θ* with respect to z_i, with which the softmax output p(ỹ_i|x_i) of the DNNs satisfies:
$$\ell(p(\tilde{y}_i|x_i), y_i) = 0 \ \text{ if } z_i = 0, \text{ otherwise } 1, \qquad (10)$$

where ℓ(p(ỹ_i|x_i), y_i) is the cross-entropy loss between the network prediction and the annotated label. With these loss values, the subsequent GMM cleaner can easily distinguish samples of X from samples of U. In other words, under the optimal θ*, the estimation of the GMM cleaner would be consistent with the partition of CPC, i.e., z′_i = z_i. In practice, in each epoch, we take the θ optimized to minimize Eq. (1) as an approximation to the optimal θ* with respect to z_i, and consequently obtain q(z′_i) as an approximation to q(z_i). Therefore, with the “small-loss prior” introduced into the prototype learning, the EM optimization procedure still works, which guarantees that CPC can learn meaningful prototypes and act as an effective cleaner. In appendix A.4, we present more details and empirical results demonstrating that the approximation holds in practice.
5 EXPERIMENTS
5.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets. We evaluate our method on the following popular LNL benchmarks. For CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), we experiment with two types of synthetic noise, symmetric and asymmetric, which are injected into the datasets following the standard setup in (Li et al., 2020a). Clothing1M (Xiao et al., 2015b) and WebVision1.0 (Li et al., 2017a) are two large-scale real-world label noise benchmarks. Clothing1M contains 1 million images in 14 categories acquired from online shopping websites; it is heavily imbalanced and most of its noise is asymmetric (Yi & Wu, 2019b). WebVision1.0 contains 2.4 million images crawled from the web using the concepts in ImageNet-ILSVRC12 (ILSVRC12). Following convention, we compare with SOTAs on the first 50 classes of WebVision, as well as the performance after transferring to ILSVRC12.
Implementation details. We plug the proposed CPC into the DivideMix (Li et al., 2020a) framework. For Clothing1M and CIFAR-10 with asymmetric noise, we employ a single class-agnostic GMM for loss-distribution modeling. For the other cases, we find that class-aware GMMs further improve the performance of CPC. Following DivideMix, we employ ResNet18 (He et al., 2016) for CIFAR-10 and CIFAR-100, and utilize an ImageNet pre-trained ResNet-50 for Clothing1M. Since previous works chose different backbones, e.g., Inception-ResNet v2 (Szegedy et al., 2017) and ResNet-50, we adopt the weaker one, i.e., ResNet-50, according to (Zheltonozhskii et al., 2021), and train it from scratch for a fair comparison. The CPC threshold τ is set to 0.5 by default for all the datasets except the extremely imbalanced Clothing1M, where it is set to 0.3. For CIFAR-10 and CIFAR-100, we train the models for 450 epochs. For the large-scale datasets Clothing1M and WebVision1.0, we train the model for 80 and 100 epochs, respectively. The warm-up period of the prototypes is set to the first 5% of epochs after network warm-up for all the datasets, except on CIFAR-100 with noise ratios larger than 80%, where it is set to 10% of the total epochs. For the other settings, we simply follow the standard setup of DivideMix. For more implementation details, please refer to appendix A.2 and the code in the supplementary materials.
5.2 COMPARISON WITH STATE-OF-THE-ART METHODS
Real-world noise benchmarks. We evaluate our method on real-world large-scale datasets and compare it with the latest SOTA label noise learning methods, including DivideMix (Li et al., 2020a), LongReMix (Cordeiro et al., 2022), NGC (Wu et al., 2021), GJS (Englesson & Azizpour, 2021), ELR+ (Liu et al., 2020), AugDMix (Nishi et al., 2021) and NCR (Huang et al., 2021). For WebVision, we measure the top-1 and top-5 accuracy on the WebVision validation set and the ImageNet ILSVRC12 validation set. We take ResNet50-based DivideMix (Zheltonozhskii et al., 2021) as the baseline. As shown in Table 1, CPC improves the top-1 and top-5 accuracy over the baseline model on WebVision by 3.33% and 2.81%, respectively. Our method achieves competitive performance on WebVision and shows stronger transfer capability, outperforming the other competitors on the ILSVRC12 validation set significantly. For Clothing1M, we apply the strong augmentation strategy (Nishi et al., 2021) to DivideMix as our baseline, and rerun the method three times. Our method achieves 75.4% accuracy on this challenging benchmark, outperforming all the other SOTAs. We also notice that though NCR achieves a SOTA result on WebVision, it shows moderate performance compared to ELR+, DivideMix and AugDMix on Clothing1M, which contains asymmetric noise with an imbalanced data distribution. This suggests that our method is more robust across different label noise scenarios.
Synthetic noise benchmarks. We evaluate the performance of CPC on the CIFAR-10 and CIFAR-100 datasets with symmetric label noise levels ranging from 20% to 90% and asymmetric noise at a rate of 40%. We take AugDMix as the baseline, and compare our method with the latest SOTA methods, among which DivideMix, LongReMix and AugDMix are semi-supervised learning based methods. Following NGC and GJS, we run our method three times with different random seeds and report the mean and standard deviation. For the other methods, e.g., ProtoMix (Li et al., 2021), we report the best results reported in their papers. As shown in Table 2, even with a baseline as strong as AugDMix, our method brings consistent performance improvement across all noise levels and noise types, and establishes new SOTAs on CIFAR-10 and CIFAR-100. Additionally, we notice that under the asymmetric noise setup, semi-supervised learning based methods consistently outperform other methods that achieve SOTA results on the WebVision benchmark, including NGC, GJS and NCR. The results suggest that semi-supervised learning based methods are more robust to asymmetric noise, and our method achieves SOTA performance among them.
5.3 ANALYSIS
Is CPC a better label noise cleaner? We evaluate the performance of label noise cleaners under both symmetric and asymmetric label noise setups. For symmetric noise, we use CIFAR-100 with 90% noise as the benchmark to reveal the relationship between CPC and the significant performance improvement under this setup. For asymmetric noise, we employ the most commonly adopted CIFAR-10-asym40% benchmark. The AUC of a cleaner's clean/noise binary classification results is calculated as the evaluation metric (a sketch of this metric is given below). We take the original class-agnostic GMM cleaner (GMM_agn) proposed in DivideMix as the baseline, and compare it to our CPC and the aforementioned naive class-aware GMM cleaner (GMM_awr). Furthermore, we also implement another version of CPC trained based on the class-aware GMM cleaner. To distinguish the two CPCs, we denote the regular one trained with the conventional class-agnostic GMM cleaner as CPC_agn, and the other one as CPC_awr. As shown in Figure 3, in both cases the regular CPC_agn outperforms the baseline GMM_agn as well as GMM_awr, which demonstrates that our class prototype-based method is the better label noise cleaner. As for the comparison between GMM_agn and GMM_awr, we find that under high symmetric noise, though GMM_agn shows better performance in the early stage of training, GMM_awr outperforms it in the second half of training. In the case of asymmetric noise, GMM_awr, which tends to wrongly classify hard clean samples in clean categories as noise, consistently underperforms GMM_agn across the whole training period. These results further show that our class prototype-based method is the better choice for applying class-aware modulation to label noise cleaning, being more robust across different noise types. Moreover, in the case of asymmetric noise, CPC_agn achieves a higher AUC than GMM_agn, which shows our method can partially make up for the shortcomings of GMM_agn. In the case of symmetric noise, we find that the class-aware GMM_awr can further improve the performance of CPC, where CPC_awr achieves the best performance among the four cleaners.
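The AUC metric used here is standard; a minimal sketch, assuming ground-truth clean/noise flags are available (as they are for the synthetic benchmarks):

```python
from sklearn.metrics import roc_auc_score

def cleaner_auc(q_clean, is_clean_gt):
    """q_clean: per-sample clean probability from a cleaner; is_clean_gt: 0/1."""
    return roc_auc_score(is_clean_gt, q_clean)
```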
How do different label noise cleaners affect label noise learning? We plug different cleaners into the DivideMix framework and keep all the other training settings the same as described in the implementation details. As shown in Table 3, the final performance of the model is consistent with the performance of the cleaner used. On CIFAR-100 with 90% symmetric noise, the performance improvement brought about by CPC_agn is 7.68%, while the model with CPC_awr outperforms the baseline method by 13.4%. We also report comparison results on the large-scale WebVision dataset, where the performance of the different models shows the same trend as on CIFAR-100-sym90%. As for the asymmetric noise situations, i.e., CIFAR-10-asym40% and Clothing1M, the model with CPC_agn, which has superior label noise partitioning capability as shown in Fig. 3, achieves the best performance, while CPC_awr beats GMM_awr in both cases. The results demonstrate that CPC helps train a better model in label noise learning.
Is the GMM cleaner beneficial to the learning of prototypes? In our method, we propose to leverage the GMM cleaner to facilitate the learning of prototypes via the “small-loss prior”. To validate the effectiveness of our method, we first compare the quality of the prototypes learnt in CPC with the prototypes learnt in another prototype-based label noise learning method, MoPro (Li et al., 2020b). We take WebVision as the benchmark and utilize the prototypes to classify test samples by measuring the similarity between samples and prototypes. On the first 50 classes of WebVision, our prototypes achieve a top-1 accuracy of 78.44%, while MoPro's accuracy is 72.23%, which demonstrates that our method learns better prototypes. To further verify the contribution of the GMM cleaner, we remove it and learn the class prototypes in CPC via the typical prototypical contrastive objective as in MoPro. In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate a less accurate data partition that in turn drags down the overall training framework for the DNNs, which proves the benefit of the GMM cleaner to our method. For more details and discussion, please refer to A.3.
6 CONCLUSION
In this paper, we reveal the long-ignored problem of class-agnostic loss distribution modeling that is widespread in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which can better distinguish clean and noise labels. We justify the effectiveness of our method by explaining it theoretically from the EM algorithm perspective and by providing extensive empirical evidence. The experimental results show that our method achieves competitive performance compared to current SOTAs.
A APPENDIX
A.1 EMPIRICAL VICINAL RISK
We introduce the empirical vicinal risk following Cordeiro et al. (2022). In the semi-supervised learning based label noise learning framework, with the labeled set X and unlabeled set U from a cleaner, the DNNs are trained to minimise the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big) + \frac{\lambda(\mathcal{U}')}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big), \qquad (11)$$
where ℓ_X′ and ℓ_U′ denote the losses for the sets X′ and U′, which are weighted by λ(U′). X′ and U′ indicate the MixMatch (Berthelot et al., 2019) augmented clean and noise sets:
$$\mathcal{X}' = \{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i), (x_i, y_i) \in \mathcal{X}\}, \quad \mathcal{U}' = \{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i), (x_i, y_i) \in \mathcal{U}\}, \qquad (12)$$
with

$$f(x'_i, y'_i|x_i, y_i) = \frac{1}{|\mathcal{X} \cup \mathcal{U}|}\sum_{\mathcal{X} \cup \mathcal{U}} \mathbb{E}_{\lambda}\big[\delta(x'_i = \lambda x_i + (1-\lambda)x_j,\; y'_i = \lambda y_i + (1-\lambda)y_j)\big], \qquad (13)$$

where δ is a Dirac mass centered at (x′, y′), λ ∼ Beta(a, a), and a ∈ (0, +∞).
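A minimal sketch of sampling from this vicinal distribution (i.e., mixup over a batch) is shown below; the Beta parameter `a` and the random pairing are illustrative choices (MixMatch additionally post-processes λ and the pseudo-labels).

```python
import numpy as np

def mixup_batch(x, y_onehot, a=4.0):
    """x: [n, ...] inputs; y_onehot: [n, K] soft labels; returns a mixed batch."""
    lam = np.random.beta(a, a)
    perm = np.random.permutation(x.shape[0])  # pair each sample with another
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```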
A.2 OTHER TRAINING DETAILS
A.2.1 TRAINING CONFIGURATIONS
In our method, we follow most of the training setup of DivideMix (Li et al., 2020a). The detailed training configurations are as follows:
• CIFAR-10 and CIFAR-100. For all the experiments on CIFAR, we train our DNN model as well as the class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 450 epochs. We set the initial learning rate to 0.02 and reduce it by a factor of 10 after 225 epochs. The warm-up period for the DNN is 10 epochs. The weight λ(U′) is set to {0, 25, 50, 150} as in DivideMix.
• Clothing1M. We train our DNN model as well as the class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 80 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set to 0.002 and reduced by a factor of 10 after 40 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight λ(U′) is set to 0.
• WebVision. We train our DNN model as well as the class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 100 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set to 0.01 and reduced by a factor of 10 after 50 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight λ(U′) is set to 0.
A.2.2 HYPER-PARAMETER STUDY
In this paper, we mainly follow the tuning procedure of DivideMix to determine the newly introduced hyper-parameters. First, we initialize the hyper-parameters to e = 5%, τ = 0.5, α = 1.
Then, for the large-scale real-world benchmarks Clothing1M and WebVision, hyper-parameter tuning is done on the validation set of Clothing1M and transferred to WebVision. For CIFAR, a small validation set with clean data is split from the training data for hyper-parameter tuning. Due to the diversity of experimental setups, it would be laborious to tune hyper-parameters for each setup individually. Therefore, we only tune the hyper-parameters under CIFAR-100 (sym80%) and CIFAR-100 (sym50%), and transfer the hyper-parameters obtained under CIFAR-100 (sym80%) to the noisier setup, i.e., CIFAR-100 (sym90%), and those obtained under CIFAR-100 (sym50%) to the less challenging setups, i.e., noise ratios lower than 50% and all noise ratios on CIFAR-10.
In practice, when a clean validation set is inaccessible, it can be difficult to tune the hyper-parameters. To shed some light on the hyper-parameter setup in such cases, we study the variation of CPC's performance with respect to the newly introduced hyper-parameters on different benchmarks. According to the experimental results, we find that CPC is robust to the choice of hyper-parameters within the ranges listed in Tab. 4. Generally, e = 5%/10%, τ = 0.5, α = 0/1 is a good choice in most cases.
A.3 DISCUSSION ON THE CONTRIBUTION OF GMM CLEANER TO CPC
In the typical prototypical contrastive objective, the unsupervised training labels are determined by the similarity between samples and prototypes. Compared to it, we empirically find that the GMM cleaner provides more accurate training labels for the prototypes, especially in the early stage of training. For example, on CIFAR-10 (asym-40%), the averaged accuracy of training labels from the GMM cleaner is 9.7% higher during the CPC warm-up period.
To evaluate the contribution of the GMM cleaner in our framework, we further present ablation results in Tab. 5. For CPC w/o GMM cleaner, we remove the GMM cleaner and learn the class prototypes in CPC with the prototypical contrastive objective as in MoPro (Li et al., 2020b). In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate a less accurate data partition that in turn drags down the overall training framework for the DNNs, as shown in Tab. 5. The situation is especially severe on challenging benchmarks with more diverse data, e.g., WebVision. The results demonstrate the benefit of the GMM cleaner in our method.
To demonstrate the superiority of our method, we also compare the quality of the prototypes learnt by our method with those learnt by MoPro (Li et al., 2020b) on the first 50 classes of WebVision. To evaluate the quality of the prototypes learnt in CPC, we utilize them to classify test samples by measuring the similarity between samples and prototypes. We implement the experiment with the official code released by the MoPro team. Our prototypes achieve a top-1 accuracy of 78.44%, while MoPro's accuracy is 72.23%. The result demonstrates that our method learns better prototypes.
A.4 SUPPLEMENTARY DISCUSSION ON THE THEORETICAL JUSTIFICATION
A.4.1 IS q(z′i) A PROPER APPROXIMATION TO q(zi) IN PRACTICE?
In Section 4.3, we replace the estimation of CPC q(z_i) in Eq. (9) with the estimation of the GMM cleaner q(z′_i) and justify that q(z′_i) can be considered an approximation to q(z_i). To investigate whether the approximation holds in practice, we calculate the K-L divergence as well as the classification consistency between q(z′_i) and q(z_i). As shown in Figure 4, as training goes on, the KLD between q(z′_i) and q(z_i) converges and the classification consistency increases; the sketch below shows how these two diagnostics can be computed.
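A sketch of the two diagnostics, assuming `q_gmm` and `q_cpc` are per-sample clean probabilities from the GMM cleaner and CPC (our notation, not the released code):

```python
import numpy as np

def kld_bernoulli(q_gmm, q_cpc, eps=1e-8):
    """Mean KL divergence between the per-sample Bernoulli posteriors."""
    p = np.clip(q_gmm, eps, 1 - eps)
    q = np.clip(q_cpc, eps, 1 - eps)
    kl = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    return kl.mean()

def classification_consistency(q_gmm, q_cpc, tau=0.5):
    """Fraction of samples the two cleaners classify identically."""
    return float(((q_gmm > tau) == (q_cpc > tau)).mean())
```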
A.4.2 TRAINING PROTOTYPES WITH LC IS AN APPROXIMATION TO THE M-STEP IN EM
As illustrated in Section 4.3, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the clean/noise probability distribution of samples, denoted q(z′_i), via the GMM cleaner, which is an approximation to q(z_i) in Eq. (8). Consequently, we replace q(z_i) in Eq. (9) with q(z′_i) and find the prototypes C that minimize the bound, which makes the loss function L_C in Eq. (5) an approximation to Eq. (9). The detailed analysis of the relationship between Eq. (5) and Eq. (9) is as follows.
Firstly, we replace the estimation of CPC q(z_i) in Eq. (9) with the estimation of the GMM cleaner q(z′_i), which is a justified approximation to q(z_i):

$$\begin{aligned} C_{new} &= \arg\min_{C} -\sum_{D} \sum_{z_i \in \{0,1\}} q(z_i) \log p(y_i|C, x_i) \\ &\approx \arg\min_{C} -\sum_{D} \sum_{z'_i \in \{0,1\}} q(z'_i) \log p(y_i|C, x_i) \\ &= \arg\min_{C} -\sum_{D} \big[q(z'_i = 0) \log p(y_i|C, x_i) + q(z'_i = 1) \log p(y_i|C, x_i)\big] \end{aligned} \qquad (14)$$
In Eq. (5), q(z′_i) is quantized to 1 and 0 by the threshold τ, which makes it a “hard” version of Eq. (14). Specifically, the first term in Eq. (14) updates the prototypes C to better align the samples classified as clean with their labeled class prototypes. It is equivalent to the effect of Eq. (5) on positive samples, where:
$$\ell = \log\big(\mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad k = y_i,\ z'_i = 0, \qquad (15)$$
where v′_i is the embedding of sample x_i. The second term in Eq. (14) updates C to prevent the samples classified as noise from aligning with their labeled class prototypes, so as to better recognize the samples as noise (i.e., z′_i = 1), which is equivalent to the effect of Eq. (5) reducing the probability of negative samples being recognized as clean:
$$\ell = \log\big(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad k = y_i,\ z'_i = 1. \qquad (16)$$
A.5 ILLUSTRATION TO THE OVERALL FRAMEWORK
In this paper, we plug CPC into the popular DivideMix framework. We delineate the overall training framework in Alg. 1.
Algorithm 1 CPC-based DivideMix
1: Input: Dataset D = (X, Y), DNNs θ(1), θ(2), CPC with class prototypes C(1), C(2), clean probability threshold τ, CPC warm-up period e.
2: θ(1), θ(2) = WarmUp(X, Y, θ(1)), WarmUp(X, Y, θ(2)) // standard training to warm up DNNs
3: while epoch < MaxEpoch do
4:   // get GMM cleaners by loss distribution modeling and calculate clean/noise probability distributions
5:   Q(2)(Z′) = GMM(X, Y, θ(1))
6:   Q(1)(Z′) = GMM(X, Y, θ(2))
7:   // calculate clean/noise probability distributions via CPC
8:   Q(2)(Z) = CPC(X, Y, θ(1), C(1))
9:   Q(1)(Z) = CPC(X, Y, θ(2), C(2))
10:  for r ∈ {1, 2} do
11:    // stage 1 begin
12:    X_GMM(r) = {(x_i, y_i, w_i) | w_i = q(r)(z′_i = 0), q(r)(z′_i = 0) > τ, (x_i, y_i) ∈ D, q(r)(z′_i = 0) ∈ Q(r)(Z′)}
13:    U_GMM(r) = {x_i | q(r)(z′_i = 0) ≤ τ, x_i ∈ X, q(r)(z′_i = 0) ∈ Q(r)(Z′)}
14:    Get noise labels {y_i | (x_i, y_i) ∈ D, x_i ∈ U_GMM(r)}
15:    Update C(r) based on Eq. (5)
16:    // stage 1 end
17:    // stage 2 begin
18:    if epoch < e then
19:      X(r) = X_GMM(r), U(r) = U_GMM(r) // use the data partition from the GMM cleaner to update DNNs during the CPC warm-up period
20:    else
21:      X(r) = {(x_i, y_i, w_i) | w_i = q(r)(z_i = 0), q(r)(z_i = 0) > τ, (x_i, y_i) ∈ D, q(r)(z_i = 0) ∈ Q(r)(Z)}
22:      U(r) = {x_i | q(r)(z_i = 0) ≤ τ, x_i ∈ X, q(r)(z_i = 0) ∈ Q(r)(Z)}
23:    end if
24:    Update θ(r) based on Eq. (11) as in standard DivideMix
25:    // stage 2 end
26:  end for
27:  epoch ← epoch + 1
28: end while
Output: DNNs θ(1), θ(2)

1. What is the main contribution of the paper regarding label cleaning and prototype classifiers?
2. What are the strengths and weaknesses of the proposed method, particularly in its dependence on representation quality and robustness?
3. Do you have any concerns regarding the experimental setting and metrics used to evaluate the method's performance?
4. Can you explain the purpose and adequacy of using pseudo-labels in the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
6. Are there any errors or misunderstandings in the paper regarding statistical tests, distance metrics, and loss functions?

Summary Of The Paper
This paper proposes a method for label cleaning based on a prototype classifier and combines it with DivideMix for learning from noisy labels. The method is evaluated on semi-synthetic datasets based on CIFAR-10/100, as well as on Clothing1M and WebVision.
Strengths And Weaknesses
Class-dependent data cleaning may be a good direction and is worth further investigation. The prototype-based mislabeled-instance detection is somewhat novel.
However, there are several weaknesses:
The authors claim that "the performance of unsupervised contrastive objectives highly depends on the quality of representation learned by the encoder" and that the "GMM cleaner can provide stronger and more robust supervision signals". There are a few issues. First, the authors seem to imply that "depending on the quality of representation" is a drawback, but there is no evidence that the "prototypical contrastive objective" does not depend on the quality of representation. Second, the "strongness and robustness" metric in this context is unclear. Nevertheless, the authors provided no support for this claim.
The authors did not explain Eq. (4). Why is the pseudo-label needed, and why is this averaging strategy used?
I think the experimental setting with 90% symmetric noise is not a good proxy for real-world noisy label problems for two reasons: (1) the overall noise rate can rarely be as high as 90% in applications; (2) it is unlikely that the noise is instance-independent, let alone symmetric. Therefore, accuracy in such a setting may be a suboptimal metric and misleading for practitioners.
Section 5.3 cannot be called "analysis". The authors only say things like, "we did this experiment, and we observed this outperformed that, so this is better". However, the mechanism of the proposed method remains unclear. For example, the authors state that the purpose of the results in Figure 3 is to "figure out why CPC brings such a significant performance improvement", yet they do not explain the reasons adequately or provide enough support. Moreover, the authors seem to equate the final classification accuracy with the quality of prototypes. This is insufficient because the prototype is only one component of the learning method, and other aspects may also influence the final classification results.
The authors also made several rookie mistakes:
It seems that the authors do not understand the Kolmogorov-Smirnov test and p-values. A distribution does not have a p-value; a hypothesis test has a p-value. A p-value less than 0.05 does not mean the confidence that two distributions are different is higher than 95%; it means that, if the two distributions were the same, data at least this extreme would be observed less than 5% of the time. Rejecting the null hypothesis does not mean supporting or refuting the alternative hypothesis.
The authors misused the term "loss" to mean "risk".
⟨·,·⟩ usually denotes an inner product, not a distance.
It seems that minimizing Eq. (2) does not minimize/maximize the distances (however they are defined).
Clarity, Quality, Novelty And Reproducibility
The writing and organization of this paper can be improved. The code (with questionable comments) provided by the authors is poorly organized and painful to read, so I could not check whether the implementation is consistent with what the authors propose in this paper. Therefore I did not check the reproducibility.
Other issues are listed below.
Adjectives like "impressive" or "outstanding" are subjective, unquantifiable, and inappropriate in scientific writing.
I do not think "noisy unlabeled set" is an appropriate term (Yes, I do not think its use in DivideMix is appropriate, either). Noisy data implies that the data is labeled.
About Figure 1:
How were epochs 30 and 10 chosen?
What distribution assumption was used here? PDF should be non-negative.
Clean = all correct. Noisy = correct + incorrect. Do you mean mislabeled examples here?
The captions of the tables can be revised. What metric was reported? How many trials? What does the boldface mean (was any statistical test used?), and are the results statistically significant?
ICLR | Title
Class Prototype-based Cleaner for Label Noise Learning
Abstract
Semi-supervised learning based methods are current SOTA solutions to the noisylabel learning problem, which rely on learning an unsupervised label cleaner first to divide the training samples into a labeled set for clean data and an unlabeled set for noise data. Typically, the cleaner is obtained via fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is class agnostic and assumes the loss distributions of clean and noise samples are the same across different classes. Unfortunately, in practice, such an assumption does not always hold due to the varying learning difficulty of different classes, thus leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). Unlike previous works treating all the classes equally, CPC fully considers loss distribution heterogeneity and applies class-aware modulation to partition the clean and noise data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously and thus can better distinguish clean and noise labels. We theoretically justify the effectiveness of our method by explaining it from the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings about performance improvement across all benchmarks.
1 INTRODUCTION
Deep Neural Networks (DNNs) have brought about significant progress to the computer vision community over past few years. One key to its success is the availability of large amount of training data with proper annotations. However, label noise is very common in real-world applications. Without proper intervention, DNNs would be easily misled by the label noise and yield poor performance.
In order to improve the performance of DNNs when learning with noise labels, various methods have been developed (Liu et al., 2020; Li et al., 2020a; Reed et al., 2014; Nishi et al., 2021). Among them, semi-supervised learning based methods (Nishi et al., 2021; Li et al., 2020a) achieve the most competitive results. The semi-supervised learning methods follow a two-stage pipeline. They first model the loss distribution of training samples to construct a noise cleaner based on the “small-loss prior” (Han et al., 2020), which says in the early stage of training, samples with smaller crossentropy losses are more likely to have clean labels. The prior is widely adopted and demonstrated to be highly effective in practice (Han et al., 2020). Given the noise cleaner, the training samples are divided into a labeled clean set and an unlabeled noise set. Then, semi-supervised learning strategies like MixMatch (Berthelot et al., 2019) are employed to train DNNs on the divided dataset.
The key to their performance lies in the accuracy of the label-noise cleaner (Cordeiro et al., 2022). Usually, a single Gaussian Mixture Model (GMM) (Li et al., 2020a) is used to model the loss distribution of all the training samples across different categories. However, this modeling procedure is class-agnostic, which assumes a DNN model has the same learning speed to fit the training samples in different categories, thus the same loss value on samples in different categories can reflect the same degree of noise likelihood.
Unfortunately, such assumption does not hold in practise. In Fig. 1, we present the cross-entropy loss distribution of training samples at the end of DNNs warm-up period. We conduct Kolmogorov-
Smirnov test (Massey Jr, 1951) to quantify the loss distribution difference between the samples in each class and samples in the whole dataset. The results show that for 54% categories in CIFAR-100 under 90% symmetric noise, the p-value is lower than 0.051 for the hypothesis test that the probability distribution of clean samples in the class is the same with the probability distribution of clean samples in the whole dataset, while the number in the case of noise samples is 53%. Therefore, the class-agnostic label noise cleaner, which establishes a overly rigid criterion shared by all the classes, would introduce more noise samples to the clean set while reject clean samples, and consequently get the model perform poorly. A straightforward remedy to the problem is to fit distinct GMMs to losses of samples in different classes respectively, yielding a class-aware GMM cleaner. Nevertheless, this class-aware modeling strategy implicitly assumes that label noise is existed in every class. In the case of asymmetric noise e.g., CIFAR10-asym40%, where samples in parts of classes are clean, such a naive strategy would classify most of hard samples in the clean classes as noise, and results in negative affect on model training.
Considering that images in the same category should share similar visual representations, the similarity between a sample and the cluster center (e.g., class prototype) of its labeled class is helpful for recognizing label noise. In this paper, we propose a simple Class Prototype-based label noise Cleaner (CPC) to apply class-aware modulation to the partitioning of clean and noise data, which takes advantage of intra-class consistency regularization in feature space and loss distribution modeling, simultaneously. CPC learns embedding for each class, i.e., class prototypes, via intra-class consistency regularization, which urges samples in the same class to gather around the corresponding class prototype while pushes samples not belonging to the class away. Unlike the aforementioned naive class-aware GMM cleaner, CPC apply class-aware modulation to label noise partitioning via representation similarity measuring without assuming that label noise is existed in every class, which is more general for different label noise scenarios. Meanwhile, CPC leverages the “small-loss prior” to provide stronger and more robust supervision signals to facilitate the learning of prototypes.
We plug CPC to the popular DivideMix(Li et al., 2020a) framework, which iterates between label noise partitioning and DNNs optimization. With the stronger label noise cleaner in the first stage, DNNs can be trained better in the second stage, which would further improve the learning of class prototypes. We theoretically justify the procedure from Expectation-Maximization algorithm perspective, which guarantees the efficacy of the method. We conduct extensive experiments on multiple noisy-label benchmarks, including CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results clearly show that CPC effectively improves accuracy of label-noise partition, and brings about consistently performance improvement across all noise levels and benchmarks.
The contribution of our work lie in three folds: (1) We reveal the long-ignored problem of classagnostic loss distribution modeling that widely existed in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC); (2) CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space si-
1A p-value < 0.05 suggests the probability that the class-wise loss distribution are the same with the global loss distribution is lower than 5%.
multaneously, which can better distinguish clean and noise labels; (3) Extensive experimental results show that our method achieves competitive performance compared to current SOTAs.
2 RELATED WORK
Recent advances in robust learning with noisy labels can be roughly divided into three groups. (a) Label correction methods aim to translate wrong labels into correct ones. Early studies rely on an auxiliary set with clean samples for clean label inference (Xiao et al., 2015a; Vahdat, 2017; Li et al., 2017b; Lee et al., 2018). Recent efforts focus on performing label correction procedures without supervision regarding clean or noise labels. (Yi & Wu, 2019a; Tanaka et al., 2018) propose to jointly optimize labels during learning model parameters. Li et al. (2020b) propose to correct corrupted labels via learning class prototypes and utilize the pseudo-label generated by measuring the similarity between prototypes and samples to train model. Wu et al. (2021) and Li et al. (2021) introduce neighbouring information in feature space to correct noise label, and propose a graphbased method and a class prototype-based method, respectively. (b) Sample selection methods select potential clean samples for training to eliminate the effect of noise labels on learning the true data distribution. (Han et al., 2018; Jiang et al., 2018; 2020; Yu et al., 2019) involve training two DNNs simultaneously and focus on the samples that are probably to be correctly labeled. (c) Semisupervised learning methods conceal noise labels and treat these samples as unlabeled data (Ding et al., 2018). DivideMix (Li et al., 2020a) is a typical algorithm among these works, which compromises an unsupervised label noise cleaner that divides the training data to a labeled clean set and an unlabeled noise set, followed by semi-supervised learning that minimize the empirical vicinal risk of the model. Inspired by DivideMix, a series of methods (Cordeiro et al., 2022; Nishi et al., 2021; Cordeiro et al., 2021) are proposed, which achieve SOTA performance. However, all these methods rely on the class-agnostic loss distribution modeling to achieve the label noise cleaner, which hinders the performance of the model. The class-agnostic loss distribution modeling implicitly assumes a DNN model has the same learning speed to memory training samples in different categories. However, in reality, the memorization speed are actually different and will cause the the problem of under learning in hard classes as revealed by Wang et al. (2019). In this paper, we focuses on another problem, i.e., class agnostic loss distribution modeling problem caused by the issue in the context of label noise cleaner. In our method, we propose the simple yet effective class prototype-based label noise cleaner to solve the problem. Besides, compared to previous prototype-based label noise learning methods (Li et al., 2020b; 2021), our method are different from them in two folds: (1) we utilize prototypes as label noise cleaner to effectively improve the semi-supervised learning based methods; (2) CPC takes advantage of both loss distribution modeling and intra-class consistency regularization in feature space simultaneously which learns better prototypes.
3 PRELIMINARY
In label noise learning, given a training set $D = (X, Y) = \{(x_i, y_i)\}_{i=1}^N$, where $x_i$ is an image and $y_i \in \{1, 2, ..., K\}$ is the annotated label over $K$ classes, the label $y_i$ could differ from the unknown true label $\hat{y}_i$. In this paper, we follow the popular label noise learning framework DivideMix (Li et al., 2020a), which first warms up the model for a few epochs by training on all the data using the standard cross-entropy loss, and then trains the model by iterating a two-stage pipeline. The pipeline comprises an unsupervised label cleaner $Q$ that divides training samples into a labeled set for clean data $\mathcal{X}$ and an unlabeled set for noise data $\mathcal{U}$, followed by a semi-supervised learning stage that trains the model to minimize the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big) + \frac{\lambda}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big), \quad (1)$$
where $\mathcal{X}'$ and $\mathcal{U}'$ indicate the MixMatch (Berthelot et al., 2019) augmented clean and noise sets. $\ell_{\mathcal{X}'}$ and $\ell_{\mathcal{U}'}$ denote the losses for samples in sets $\mathcal{X}'$ and $\mathcal{U}'$, which are weighted by $\lambda$. $p(\tilde{y}'_i|x'_i)$ is the softmax output of the DNN, where $\tilde{y}'_i$ is the predicted label. For more details about the EVR, please refer to Appendix A.1.
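For concreteness, below is a minimal PyTorch sketch of this objective. It assumes MixMatch-style soft targets have already been produced for both sets, and follows the common DivideMix choice of a soft cross-entropy term for $\mathcal{X}'$ and an MSE term for $\mathcal{U}'$; all function and variable names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def evr_loss(logits_x, targets_x, logits_u, targets_u, lam):
    """Empirical vicinal risk (Eq. 1): a supervised term on the
    MixMatch-augmented clean set X' plus a weighted term on the noise
    set U'. targets_* are soft labels produced by MixMatch mixing."""
    # Cross-entropy with soft targets for the labeled (clean) samples.
    loss_x = -(targets_x * F.log_softmax(logits_x, dim=1)).sum(dim=1).mean()
    # MSE between predicted probabilities and guessed labels for the
    # unlabeled samples, as is common in DivideMix-style frameworks.
    probs_u = torch.softmax(logits_u, dim=1)
    loss_u = F.mse_loss(probs_u, targets_u)
    return loss_x + lam * loss_u
```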
In Li et al. (2020a), the unsupervised label cleaner operates under the “small-loss prior”, which is widely adopted and demonstrated to be highly effective (Han et al., 2020). The prior assumes that in the early stage of training, samples with smaller cross-entropy losses are more likely to have clean labels. The well-known insight behind the “small-loss prior” is that DNNs tend to learn simple patterns first before fitting label noise (Arpit et al., 2017). Given a training sample $x_i$ and the softmax output $p(\tilde{y}_i|x_i)$ of the DNN, where $\tilde{y}_i$ is the predicted label, the cross-entropy loss $\ell(p(\tilde{y}_i|x_i), y_i)$ reflects how well the model fits the training sample.
To achieve the unsupervised label cleaner $Q$, a two-component Gaussian Mixture Model (GMM) is employed to fit the loss distribution of all training samples, i.e., $\ell(p(\tilde{y}_i|x_i), y_i) \sim \phi_0\mathcal{N}(\mu_0, \sigma_0) + \phi_1\mathcal{N}(\mu_1, \sigma_1)$, where $\mu_0 < \mu_1$ and $\phi$ is a mixing coefficient. The component with the smaller mean represents the distribution of clean samples and the other one is for noise samples. We use $z_i \in \{0, 1\}$ to indicate whether the data is clean. Then, $q(z_i = 0)$ represents the clean probability of $x_i$, which is the posterior probability of its loss belonging to the clean component. The label cleaner is shared by training samples across different classes, i.e., it is class-agnostic. A hypothesis implicitly accompanying this loss distribution modeling method has been ignored by current works: it assumes the loss distributions of clean and noise samples are consistent across different categories. Unfortunately, as illustrated in Fig. 1, the hypothesis does not hold in practice. In this paper, we propose the class prototype-based label noise cleaner, which applies class-aware modulation to the partitioning of clean and noise data and improves label noise learning.
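As a reference, the class-agnostic GMM cleaner can be sketched with scikit-learn as below; the min-max normalization of losses and the illustrative hyper-parameters follow common practice in DivideMix-style implementations, not necessarily the exact values used here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_clean_probability(losses, threshold=0.5):
    """Fit a two-component GMM to per-sample cross-entropy losses and
    return the posterior of each sample belonging to the small-loss
    (clean) component -- the class-agnostic cleaner q(z_i = 0)."""
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    # Normalize losses to [0, 1]; a common preprocessing step.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = gmm.means_.argmin()  # component with the smaller mean
    prob_clean = gmm.predict_proba(losses)[:, clean_component]
    return prob_clean, prob_clean > threshold  # q(z_i = 0) and the clean mask
```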
4 METHODOLOGY
4.1 OVERVIEW
Our method follows the two-stage label noise learning framework DivideMix (Li et al., 2020a) and improves it with the proposed CPC. CPC comprises class prototypes $C = \{c_k \in \mathbb{R}^{1\times d}\,|\,k = 1, 2, ..., K\}$, where $c_k$ indicates the prototype of the $k$-th class and $d$ is the dimension of the prototype embedding. Our DNN model consists of a CNN backbone, a classifier head and a projection layer. The backbone maps an image input $x_i$ to a feature vector $v_i \in \mathbb{R}^{1\times D}$. The classifier takes $v_i$ as input and outputs the class prediction $p(\tilde{y}_i|x_i)$. The projection layer serves to project the high-dimensional feature $v_i$ to a low-dimensional embedding $v'_i \in \mathbb{R}^{1\times d}$, where $d < D$. As shown in Fig. 2, we update the DNN as well as the CPC by iterating a two-stage training pipeline in every epoch. In the first stage, we update the CPC as well as the projector in the DNN, and utilize the updated CPC to partition label noise. We first calculate the cross-entropy loss of every training sample and fit a GMM to the losses. We utilize the GMM as a label noise cleaner to get a labeled clean set $\mathcal{X}_{GMM}$ and an unlabeled noise set $\mathcal{U}_{GMM}$. This data partition is utilized to update the prototypes in CPC and the parameters of the projector. Note that we cut off the gradient back-propagation from the projector to the CNN backbone. Then, the updated CPC is employed to re-divide the training data into another two sets $\mathcal{X}$ and $\mathcal{U}$. In the second stage, we train the DNN model to minimize the EVR in Eq. (1) with data partitioned by the cleaner. In the first $e$ epochs, we let the CPC warm up, and minimize the EVR of the DNN based on training data partitioned by the GMM cleaner. After the $e$-th epoch, the label noise estimation results of CPC, i.e., $\mathcal{X}$ and $\mathcal{U}$, are employed to train the DNN, while the estimation results of the GMM cleaner are only used to update the prototypes in CPC. In inference, we directly utilize the DNN classifier for image recognition. In A.5, we further delineate the full framework.
4.2 CLASS PROTOTYPE-BASED LABEL NOISE CLEANER
In order to apply class-aware modulation to the label noise partitioning, we propose to learn an embedding space where samples from the same class are aligned with their class prototypes, and leverage the prototypes to recognize noise labels. The prototypes are typically learnt with intra-class consistency regularization, which urges samples in the same class to align with the corresponding class prototype while keeping samples not belonging to the class away. Previous methods (Wang et al., 2022; Li et al., 2020b) apply the intra-class consistency regularization to prototype learning via unsupervised contrastive objectives, e.g., the prototypical contrastive objective (Li et al., 2020c), where the unsupervised training labels are typically determined by the similarity between samples and prototypes. The accuracy of these training labels depends heavily on the quality of the representations learnt by the CNN encoder, which can be too low to effectively update the prototypes, especially in the early stage of training. In contrast, we empirically find that the GMM cleaner, which operates under the well-evaluated “small-loss prior”, is not as sensitive to the representation quality as the prototypes, and can provide more robust and accurate training labels.
Therefore, we propose to take samples in the clean set $\mathcal{X}_{GMM}$ as positive samples and those in the noise set $\mathcal{U}_{GMM}$ as negative samples to update the prototypes. Specifically, given the feature embedding $v'_i$ of a sample $x_i$ from $\mathcal{X}_{GMM}$, we update the prototypes $C$ as well as the parameters of the projector to maximize the score $q(z_i = 0)$ between $c_{k=y_i}$ and $v'_i$, and minimize the scores between $c_{k\neq y_i}$ and $v'_i$, by minimizing $L_{\mathcal{X}_{GMM}}$:
$$L_{\mathcal{X}_{GMM}} = -\frac{1}{|\mathcal{X}_{GMM}|}\sum_{\mathcal{X}_{GMM}}\sum_{k=1}^{K}\ell_k(v'_i, y_i), \quad \text{where} \quad \ell_k(v'_i, y_i) = \begin{cases} \log\big(\mathrm{sigmoid}(v'_i c_k^{\top})\big), & k = y_i, \\ \lambda_{neg}\log\big(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\big), & k \neq y_i, \end{cases} \quad (2)$$
where $\lambda_{neg} = \frac{1}{K}$ weights the losses between the positive pair and the negative pairs to avoid under-fitting the positive samples. Given $v'_i$ of a sample $x_i$ from $\mathcal{U}_{GMM}$, we update the prototype $c_{k=y_i}$ as well as the parameters of the projector to minimize the score $q(z_i = 0)$ between $c_{k=y_i}$ and $v'_i$ by minimizing $L_{\mathcal{U}_{GMM}}$:
$$L_{\mathcal{U}_{GMM}} = -\frac{1}{|\mathcal{U}_{GMM}|}\sum_{\mathcal{U}_{GMM}}\log\big(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad \text{where } k = y_i. \quad (3)$$
At last, noise samples in $\mathcal{U}_{GMM}$ with high classification confidence are more likely to belong to the class predicted by the DNN, which is potentially valuable for the update of prototypes. Therefore, we collect such training samples $\mathcal{X}_P$ from $\mathcal{U}_{GMM}$, taking the averaged classification confidence of samples in $\mathcal{X}_{GMM}$ as the threshold. Specifically, given a sample in $\mathcal{U}_{GMM}$ with the DNN-predicted label $k = \arg\max_k p(\tilde{y}_i|x_i)$, the sample is collected into $\mathcal{X}_P$ if $p(\tilde{y}_i|x_i)_k > \mathrm{average}\big(\{p(\tilde{y}_j|x_j)_k \mid (x_j, y_j) \in \mathcal{X}_{GMM},\, y_j = k\}\big)$. Then, we update the prototypes and the projector to minimize $L_{\mathcal{X}_P}$:
$$L_{\mathcal{X}_P} = -\frac{1}{|\mathcal{X}_P|}\sum_{\mathcal{X}_P}\log\big(\mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad \text{where } k = \arg\max_k p(\tilde{y}_i|x_i). \quad (4)$$
The overall empirical risk $L_C$ for the prototypes and the projector is as follows:
$$L_C = L_{\mathcal{X}_{GMM}} + L_{\mathcal{U}_{GMM}} + \alpha L_{\mathcal{X}_P}, \quad (5)$$
where $\alpha$ is a weight scalar.
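A simplified PyTorch sketch of this objective is given below. It is a sketch under stated assumptions rather than the exact implementation: the confidence threshold for $\mathcal{X}_P$ is simplified to a global (rather than per-class) average of clean-sample confidences, and all tensor names are illustrative.

```python
import torch

def cpc_loss(emb, labels, is_clean, pred_logits, prototypes, alpha=1.0, eps=1e-8):
    """Sketch of L_C = L_XGMM + L_UGMM + alpha * L_XP (Eq. 2-5).
    emb: (B, d) projected embeddings v'_i; prototypes: (K, d);
    is_clean: boolean mask from the GMM cleaner."""
    B, K = emb.size(0), prototypes.size(0)
    idx = torch.arange(B, device=emb.device)
    scores = torch.sigmoid(emb @ prototypes.t())      # (B, K) clean scores
    pos = scores[idx, labels]                         # score with labeled prototype
    # L_XGMM: pull clean samples to their prototype, push away from others
    # (lambda_neg = 1/K weights the negative pairs, as in Eq. 2).
    neg = (1.0 / K) * torch.log(1 - scores + eps)
    neg = neg.sum(dim=1) - neg[idx, labels]
    loss_x = -(torch.log(pos + eps) + neg)[is_clean].mean()
    # L_UGMM: push noise samples away from their (likely wrong) labeled prototype.
    loss_u = -torch.log(1 - pos + eps)[~is_clean].mean()
    # L_XP: pull confident noise samples toward the DNN-predicted class;
    # here the threshold is a simplified global average over clean samples.
    conf, pred = torch.softmax(pred_logits, dim=1).max(dim=1)
    confident = (~is_clean) & (conf > conf[is_clean].mean())
    pos_pred = scores[idx, pred]
    loss_p = -torch.log(pos_pred + eps)[confident].mean() if confident.any() else 0.0
    return loss_x + loss_u + alpha * loss_p
```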
CPC distinguishes a clean sample $(x_i, y_i)$ with the score $q(z_i = 0) = \mathrm{sigmoid}(v'_i c_{k=y_i}^{\top})$ and the threshold $\tau$. Samples with $q(z_i = 0) > \tau$ are classified as clean, and otherwise as noise.
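In code, this decision rule is a one-liner; a minimal sketch (names are illustrative):

```python
import torch

def cpc_partition(emb, labels, prototypes, tau=0.5):
    """CPC decision rule: q(z_i = 0) = sigmoid(<v'_i, c_{y_i}>);
    a sample is kept as clean if its score exceeds the threshold tau."""
    q_clean = torch.sigmoid((emb * prototypes[labels]).sum(dim=1))
    return q_clean, q_clean > tau
```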
4.3 THEORETICAL JUSTIFICATION ON THE EFFICACY OF CPC
We provide a theoretical justification of the efficacy of CPC from the perspective of the Expectation-Maximization algorithm, which guarantees that although CPC does not follow the classical prototypical contrastive objective, it can still learn meaningful prototypes and act as an effective cleaner.
We consider the training data with label noise $D = (X, Y) = \{(x_i, y_i)\}_{i=1}^N$ as the observable data, and $Z \in \{0, 1\}^N$ as the latent variable, where $z_i = 0$ iff $(x_i, y_i)$ is clean (i.e., $y_i = \hat{y}_i$). The prototypes $C$ in the cleaner are taken as the parameters to be updated. Then, the negative log-likelihood for $D$ given $C$ is as follows:
$$\mathrm{NLL}(D|C) = -\sum_{D}\log\sum_{z_i\in\{0,1\}} p(x_i, y_i, z_i|C) = -\sum_{D}\log\sum_{z_i\in\{0,1\}} q(z_i)\,\frac{p(x_i, y_i, z_i|C)}{q(z_i)}, \quad (6)$$
where $q(z_i) = p(z_i|x_i, y_i, C)$. According to the Bayes theorem and Jensen's inequality, we have
$$\mathrm{NLL}(D|C) = -\sum_{D}\log\sum_{z_i\in\{0,1\}} q(z_i)\,p(x_i, y_i|C) \le -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(x_i, y_i|C) = -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i) + \mathrm{const}, \quad (7)$$
where $-\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i)$ is the upper bound of $\mathrm{NLL}(D|C)$. Typically, we can adopt the EM algorithm to find the prototypes $C$ that minimize the upper bound by iterating:
E-step: Compute a new estimate of $q(z_i)$ (i.e., clean or noise) according to the prototypes $C^{old}$ from the last iteration:
$$q(z_i) = p(z_i|x_i, y_i, C^{old}). \quad (8)$$
M-step: Find the prototypes $C$ that minimize the bound:
$$C^{new} = \arg\min_C -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i). \quad (9)$$
In our method, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the clean/noise distribution of samples, denoted as $q(z'_i)$, via the GMM cleaner instead of $q(z_i)$ in Eq. (8). Consequently, we replace $q(z_i)$ in Eq. (9) with $q(z'_i)$ and find the prototypes $C$ that minimize the bound. Next, we justify that the EM algorithm still works by showing that $q(z'_i)$ can be considered an approximation to $q(z_i)$ in our framework.
In our method, $q(z'_i) = p(z'_i|\ell(p(\tilde{y}_i|x_i), y_i))$, where $\tilde{y}_i \sim p(\tilde{y}_i|x_i, \theta)$ is the label predicted by the DNN parameterized by $\theta$. As introduced in Section 4.1, in the first stage of each epoch, CPC's estimation results $z_i \sim q(z_i)$ are utilized to divide the training samples into a labeled set for clean data $\mathcal{X} = \{(x_i, y_i)|z_i = 0\}$ and an unlabeled set for noise data $\mathcal{U} = \{(x_i, y_i)|z_i = 1\}$. Then the parameters $\theta$ of the DNN are optimized using Eq. (1) in the second stage. There exists an optimal $\theta^*$ with respect to $z_i$, with which the softmax output $p(\tilde{y}_i|x_i)$ of the DNN satisfies:
$$\ell(p(\tilde{y}_i|x_i), y_i) = 0 \;\text{ if }\; z_i = 0, \;\text{ otherwise } 1, \quad (10)$$
where $\ell(p(\tilde{y}_i|x_i), y_i)$ is the cross-entropy loss between the network prediction and the annotated label. With these loss values, the subsequent GMM cleaner can easily distinguish samples of $\mathcal{X}$ from samples of $\mathcal{U}$. In other words, under the optimal $\theta^*$, the estimation of the GMM cleaner is consistent with the partition of CPC, i.e., $z'_i = z_i$. In practice, in each epoch, we take the $\theta$ optimized to minimize Eq. (1) as an approximation to the optimal $\theta^*$ with respect to $z_i$, and consequently obtain $q(z'_i)$ as an approximation to $q(z_i)$. Therefore, with the “small-loss prior” introduced into prototype learning, the EM optimization procedure still works, which guarantees that CPC can learn meaningful prototypes and act as an effective cleaner. In Appendix A.4, we further present more details and empirical results to demonstrate that the approximation holds in practice.
5 EXPERIMENTS
5.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets. We evaluate our method on the following popular LNL benchmarks. For CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), we experiment with two types of synthetic noise: symmetric
and asymmetric, which are injected into the datasets following the standard setup in (Li et al., 2020a). Clothing1M (Xiao et al., 2015b) and WebVision1.0 (Li et al., 2017a) are two large-scale real-world label noise benchmarks. Clothing1M contains 1 million images in 14 categories acquired from online shopping websites, which is heavily imbalanced and most of the noise is asymmetric (Yi & Wu, 2019b). WebVision1.0 contains 2.4 million images crawled from the web using the concepts in ImageNet-ILSVRC12 (ILSVRC12). Following convention, we compare with SOTAs on the first 50 classes of WebVision, as well as the performance after transferring to ILSVRC12.
Implementation details. We plug the proposed CPC into the DivideMix (Li et al., 2020a) framework. For Clothing1M and CIFAR-10 with asymmetric noise, we employ a single class-agnostic GMM for loss-distribution modeling. For other cases, we find that class-aware GMMs further improve the performance of CPC. Following DivideMix, we employ ResNet-18 (He et al., 2016) for CIFAR-10 and CIFAR-100, and utilize an ImageNet pre-trained ResNet-50 for Clothing1M. Since previous works chose different backbones, e.g., Inception-ResNet v2 (Szegedy et al., 2017) and ResNet-50, we adopt the weaker one, i.e., ResNet-50 according to (Zheltonozhskii et al., 2021), and train it from scratch for a fair comparison. The threshold $\tau$ of CPC is set to 0.5 by default for all the datasets except for the extremely imbalanced Clothing1M, where it is set to 0.3. For CIFAR-10 and CIFAR-100, we train the models for 450 epochs. For the large-scale datasets Clothing1M and WebVision1.0, we train the model for 80 and 100 epochs, respectively. The warm-up period of the prototypes is set to the first 5% of epochs after network warm-up for all datasets, except on CIFAR-100 with noise ratios larger than 80%, where it is set to 10% of the total epochs. For the other settings, we simply follow the standard set-up of DivideMix. For more implementation details, please refer to Appendix A.2 and the codes in the supplementary materials.
5.2 COMPARISON WITH STATE-OF-THE-ART METHODS
Real-world noise benchmarks. We evaluate our method on large-scale real-world datasets and compare it with the latest SOTA label noise learning methods, including DivideMix (Li et al., 2020a), LongReMix (Cordeiro et al., 2022), NGC (Wu et al., 2021), GJS (Englesson & Azizpour, 2021), ELR+ (Liu et al., 2020), AugDMix (Nishi et al., 2021) and NCR (Huang et al., 2021). For WebVision, we measure the top-1 and top-5 accuracy on the WebVision validation set and the ImageNet ILSVRC12 validation set. We take ResNet50-based DivideMix (Zheltonozhskii et al., 2021) as the baseline. As shown in Table 1, our CPC improves top-1 and top-5 accuracy over the baseline model on WebVision by 3.33% and 2.81%, respectively. Our method achieves competitive performance on WebVision and shows stronger transfer capability, outperforming other competitors on the ILSVRC12 validation set significantly. For Clothing1M, we apply the strong augmentation strategy (Nishi et al., 2021) to DivideMix as our baseline, and rerun the method three times. Our method achieves 75.4% accuracy on this challenging benchmark, outperforming all the other SOTAs. We also notice that though NCR achieves a SOTA result on WebVision, it shows moderate performance compared to ELR+, DivideMix and AugDMix on Clothing1M, which contains asymmetric noise with an imbalanced data distribution. This reveals that our method is more robust across different label noise scenarios.
Synthetic noise benchmarks. We evaluate the performance of CPC on the CIFAR-10 and CIFAR-100 datasets with symmetric label noise levels ranging from 20% to 90% and asymmetric noise of rate 40%. We take AugDMix as the baseline, and compare our method with the latest SOTA methods, where DivideMix, LongReMix and Aug-DMix are semi-supervised learning based methods. Following NGC and GJS, we run our method three times with different random seeds and report the mean and standard deviation. For other methods, e.g., ProtoMix (Li et al., 2021), we report the best results reported in their papers. As shown in Table 2, even with a baseline as strong as AugDMix, our method brings about consistent performance improvement across all noise levels as well as noise types, and establishes new SOTAs on CIFAR-10 and CIFAR-100. Additionally, we notice that, under the asymmetric noise set-up, semi-supervised learning based methods consistently outperform other methods that achieve SOTA results on the WebVision benchmark, including NGC, GJS and NCR. The results reveal that semi-supervised learning based methods are more robust to asymmetric noise, while our method achieves SOTA performance among them.
5.3 ANALYSIS
Is CPC a better label noise cleaner? We evaluate the performance of label noise cleaners under both symmetric and asymmetric label noise set-ups. For symmetric noise, we use CIFAR-100 with 90% noise as the benchmark to reveal the relationship between CPC and the significant performance improvement under this set-up. For asymmetric noise, we employ the most commonly adopted CIFAR-10-asym40% as the benchmark. The AUC of the clean/noise binary classification results of a cleaner is calculated as the evaluation metric. We take the original class-agnostic GMM cleaner (GMM-agn) proposed in DivideMix as the baseline, and compare it to our CPC and the aforementioned naive class-aware GMM cleaner (GMM-awr). Furthermore, we also implement another version of CPC trained based on the class-aware GMM cleaner. To distinguish the two CPCs, we denote the regular one trained based on the conventional class-agnostic GMM cleaner as CPC-agn, and the other one as CPC-awr. As shown in Figure 3, in both cases, the regular CPC-agn outperforms the baseline GMM-agn as well as GMM-awr, which demonstrates that our class prototype-based method is the better label noise cleaner. As for the comparison between GMM-agn and GMM-awr, we find that under high symmetric noise, though GMM-agn shows better performance in the early stage of training, GMM-awr outperforms it in the second half of training. In the case of asymmetric noise, GMM-awr, which tends to wrongly classify hard clean samples in clean categories as noise, consistently underperforms GMM-agn across the whole training period. The results further prove that our class prototype-based method is the better choice for applying class-aware modulation to label noise cleaning, and it is more robust across different noise types. Moreover, we find that in the case of asymmetric noise, CPC-agn achieves a higher AUC compared to GMM-agn, which shows that our method can partially make up for the shortcomings of GMM-agn. In the case of symmetric noise, we find that GMM-awr can further improve the performance of CPC, where CPC-awr achieves the best performance among the four cleaners.
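For reference, the AUC reported here can be computed from any cleaner's per-sample clean probabilities; a minimal sketch, assuming ground-truth noise indicators are available for evaluation (names are illustrative):

```python
from sklearn.metrics import roc_auc_score

def cleaner_auc(is_noise, q_clean):
    """AUC of the clean/noise binary classification of a label noise cleaner.
    is_noise: ground-truth indicator (1 = corrupted label) per sample;
    q_clean: the cleaner's clean probability q(z_i = 0) per sample."""
    return roc_auc_score(y_true=is_noise, y_score=1.0 - q_clean)
```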
How do different label noise cleaners affect label noise learning? We plug different cleaners into the DivideMix framework, and keep all the other training settings the same as described in the implementation details. As shown in Table 3, the final performance of the model is consistent with the performance of the cleaner used. On CIFAR-100 with 90% symmetric noise, the performance improvement brought about by CPC-agn is 7.68%, while the model with CPC-awr outperforms the baseline method by 13.4%. We also report the comparison results on the large-scale WebVision dataset, where the performance of different models shows the same trend as on CIFAR-100-sym90%. As for the asymmetric noise situations, i.e., CIFAR-10-asym40% and Clothing1M, the model with CPC-agn, which has superior label noise partitioning capability as shown in Fig. 3, achieves the best performance, while CPC-awr beats GMM-awr in both cases. The results demonstrate that CPC helps train a better model in label noise learning.
Is the GMM cleaner beneficial to the learning of prototypes? In our method, we propose to leverage the GMM cleaner to facilitate the learning of prototypes via the “small-loss prior”. To validate the effectiveness of our method, we first compare the quality of prototypes learnt in CPC with prototypes learnt in another prototype-based label noise learning method, MoPro (Li et al., 2020b). We take WebVision as the benchmark and utilize the prototypes to classify test samples by measuring the similarity between samples and prototypes. On the first 50 classes of WebVision, our prototypes achieve a top-1 accuracy of 78.44%, while MoPro's accuracy is 72.23%, which demonstrates that our method is able to learn better prototypes. To further verify the contribution of the GMM cleaner, we remove the GMM cleaner and learn class prototypes in CPC via the typical prototypical contrastive objective as in MoPro. In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate less accurate data partitions that further drag down the overall training framework for DNNs, which proves the benefit of the GMM cleaner to our method. For more details and discussion, please refer to A.3.
6 CONCLUSION
In this paper, we reveal the long-ignored problem of class-agnostic loss distribution modeling that widely exists in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which can better distinguish clean and noise labels. We justify the effectiveness of our method by explaining it theoretically from the EM algorithm perspective and by providing extensive empirical evidence. The experimental results show that our method achieves competitive performance compared to current SOTAs.
A APPENDIX
A.1 EMPIRICAL VICINAL RISK
We introduce the Empirical Vicinal Risk following Cordeiro et al. (2022). In the semi-supervised learning based label noise learning framework, with the labeled set $\mathcal{X}$ and unlabeled set $\mathcal{U}$ from a cleaner, the DNNs are trained to minimize the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big) + \frac{\lambda(\mathcal{U}')}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\big(p(\tilde{y}'_i|x'_i), y'_i\big), \quad (11)$$
where $\ell_{\mathcal{X}'}$ and $\ell_{\mathcal{U}'}$ denote the losses for sets $\mathcal{X}'$ and $\mathcal{U}'$, which are weighted by $\lambda(\mathcal{U}')$. $\mathcal{X}'$ and $\mathcal{U}'$ indicate the MixMatch (Berthelot et al., 2019) augmented clean and noise sets:
$$\mathcal{X}' = \{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i),\ (x_i, y_i) \in \mathcal{X}\}, \qquad \mathcal{U}' = \{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i),\ (x_i, y_i) \in \mathcal{U}\}, \quad (12)$$
with
$$f(x'_i, y'_i|x_i, y_i) = \frac{1}{|\mathcal{X} \cup \mathcal{U}|}\sum_{\mathcal{X} \cup \mathcal{U}} \mathbb{E}_{\lambda}\big[\delta\big(x'_i = \lambda x_i + (1-\lambda)x_j,\ y'_i = \lambda y_i + (1-\lambda)y_j\big)\big], \quad (13)$$
where $\delta$ is a Dirac mass centered at $(x', y')$, $\lambda \sim \mathrm{Beta}(a, a)$, and $a \in (0, +\infty)$.
A.2 OTHER TRAINING DETAILS
A.2.1 TRAINING CONFIGURATIONS
In our method, we follow most of the training set-up of DivideMix (Li et al., 2020a). We present the detailed training configurations as follows:
• CIFAR-10 and CIFAR-100. For all the experiments on CIFAR, we train our DNN model as well as the class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 450 epochs. We set the initial learning rate to 0.02, and reduce it by a factor of 10 after 225 epochs. The warm-up period for the DNN is 10 epochs. The weight $\lambda(\mathcal{U}')$ is set to {0, 25, 50, 150} as in DivideMix.
• Clothing1M. We train our DNN model as well as the class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 80 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set to 0.002 and reduced by a factor of 10 after 40 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight $\lambda(\mathcal{U}')$ is set to 0.
• WebVision. We train our DNN model as well as the class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 100 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set to 0.01 and reduced by a factor of 10 after 50 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight $\lambda(\mathcal{U}')$ is set to 0.
A.2.2 HYPER-PARAMETER STUDY
In this paper, we mainly follow the tuning procedure of DivideMix to determine the newly introduced hyper-parameters. First of all, we initialize the hyper-parameters to $e = 5\%$, $\tau = 0.5$, $\alpha = 1$.
Then, for the large-scale real-world benchmarks Clothing1M and WebVision, hyper-parameter tuning is done on the validation set of Clothing1M and transferred to WebVision. For CIFAR, a small validation set with clean data is split from the training data for hyper-parameter tuning. Due to the diversity of experimental set-ups, it would be a tedious task to tune the hyper-parameters for each set-up respectively. Therefore, we only tune the hyper-parameters under CIFAR-100 (sym-80%) and CIFAR-100 (sym-50%), and transfer the hyper-parameters obtained under CIFAR-100 (sym-80%) to the noisier set-up, i.e., CIFAR-100 (sym-90%), and those obtained under CIFAR-100 (sym-50%) to the less challenging set-ups, i.e., noise ratios lower than 50% and all noise ratios on CIFAR-10.
In practice, when a clean validation set is inaccessible, it is difficult to tune the hyper-parameters. To shed some light on the hyper-parameter set-up in these cases, we study the variation of CPC's performance with respect to the newly introduced hyper-parameters on different benchmarks and draw some empirical conclusions. According to the experimental results, we find that CPC is robust to the choice of hyper-parameters within the ranges listed in Tab. 4. Generally, $e = 5\%/10\%$, $\tau = 0.5$, $\alpha = 0/1$ is a good choice in most cases.
A.3 DISCUSSION ON THE CONTRIBUTION OF GMM CLEANER TO CPC
In the typical prototypical contrastive objective, the unsupervised training labels are determined by the similarity between samples and prototypes. Compared to it, we empirically find that the GMM cleaner provides more accurate training labels for the prototypes, especially in the early stage of training. For example, on CIFAR-10 (asym-40%), the averaged accuracy of training labels from the GMM cleaner is 9.7% higher during the CPC warm-up period.
To evaluate the contribution of the GMM cleaner in our framework, we further present ablation study results in Tab. 5. For CPC w/o GMM cleaner, we remove the GMM cleaner and learn class prototypes in CPC with the prototypical contrastive objective as in MoPro (Li et al., 2020b). In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate less accurate data partitions that further drag down the overall training framework for DNNs, as shown in Tab. 5. The situation is especially severe on challenging benchmarks with more diverse data, e.g., WebVision. The results demonstrate the benefits of the GMM cleaner in our method.
To prove the superiority of our method, we also compare the quality of prototypes learnt in our method with prototypes learnt in MoPro (Li et al., 2020b) on the first 50 classes of WebVision. To evaluate the quality of prototypes learnt in CPC, we utilize the prototypes to classify test samples via measuring the similarity between samples and prototypes. We implement the experiment with the official code released by the MoPro team. The results show that our prototype achieves a top1 accuracy of 78.44%, while MoPro’s accuracy is 72.23%. The result demonstrates that our method is able to learn better prototypes.
A.4 SUPPLEMENTARY DISCUSSION ON THE THEORETICAL JUSTIFICATION
A.4.1 IS q(z′i) A PROPER APPROXIMATION TO q(zi) IN PRACTICE?
In Section 4.3, we replace the estimation of CPC $q(z_i)$ in Eq. (9) with the estimation of the GMM cleaner $q(z'_i)$ and justify that $q(z'_i)$ can be considered an approximation to $q(z_i)$. To investigate whether the approximation holds in practice, we calculate the K-L divergence as well as the classification consistency between $q(z'_i)$ and $q(z_i)$. As shown in Figure 4, as training goes on, the KLD between $q(z'_i)$ and $q(z_i)$ converges and the classification consistency increases.
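A minimal sketch of this consistency check, treating each cleaner's output as a per-sample Bernoulli posterior (names are illustrative):

```python
import numpy as np

def cleaner_agreement(q_gmm, q_cpc, tau=0.5, eps=1e-8):
    """Compare GMM and CPC clean-probability estimates: per-sample
    Bernoulli KL divergence KL(q(z') || q(z)) and the fraction of
    samples on which both cleaners make the same clean/noise decision."""
    q_gmm = np.clip(np.asarray(q_gmm), eps, 1 - eps)
    q_cpc = np.clip(np.asarray(q_cpc), eps, 1 - eps)
    kl = (q_gmm * np.log(q_gmm / q_cpc)
          + (1 - q_gmm) * np.log((1 - q_gmm) / (1 - q_cpc)))
    consistency = np.mean((q_gmm > tau) == (q_cpc > tau))
    return kl.mean(), consistency
```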
A.4.2 TRAINING PROTOTYPES WITH LC IS AN APPROXIMATION TO THE M-STEP IN EM
As illustrated in Section 4.3, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the probability distribution of clean or noise of samples, denoted as $q(z'_i)$, via the GMM cleaner, which is an approximation to $q(z_i)$ in Eq. (8). Consequently, we replace $q(z_i)$ in Eq. (9) with $q(z'_i)$ and find the prototypes $C$ that minimize the bound, which makes the loss function $L_C$ in Eq. (5) an approximation to Eq. (9). The detailed analysis of the relationship between Eq. (5) and Eq. (9) is as follows.
Firstly, we replace the estimation of CPC $q(z_i)$ in Eq. (9) with the estimation of the GMM cleaner $q(z'_i)$, which is a justified approximation to $q(z_i)$:
$$C^{new} = \arg\min_C -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i) \approx \arg\min_C -\sum_{D}\sum_{z'_i\in\{0,1\}} q(z'_i)\log p(y_i|C, x_i) = \arg\min_C -\sum_{D}\big[q(z'_i = 0)\log p(y_i|C, x_i) + q(z'_i = 1)\log p(y_i|C, x_i)\big] \quad (14)$$
In Eq. (5), $q(z'_i)$ is quantized to 1 or 0 by the threshold $\tau$, which makes Eq. (5) a “hard” version of Eq. (14). Specifically, the first term in Eq. (14) updates the prototypes $C$ to better align the samples classified as clean with their labeled class prototypes. It is equivalent to the effect of Eq. (5) on positive samples, where:
$$\ell = \log\big(\mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad k = y_i,\ z'_i = 0, \quad (15)$$
where $v'_i$ is the embedding of sample $x_i$. The second term in Eq. (14) updates $C$ to prevent the samples classified as noise from aligning with their labeled class prototypes, so as to better recognize such samples as noise (i.e., $z'_i = 1$), which is equivalent to the effect of Eq. (5) reducing the probability of negative samples being recognized as clean:
$$\ell = \log\big(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\big), \quad k = y_i,\ z'_i = 1. \quad (16)$$
A.5 ILLUSTRATION TO THE OVERALL FRAMEWORK
In this paper, we plug CPC into the popular DivideMix framework. We delineate the overall training framework in Alg. 1.
Algorithm 1 CPC-based DivideMix
1: Input: Dataset D = (X, Y), DNNs θ(1), θ(2), CPC with class prototypes C(1), C(2), clean probability threshold τ, CPC warm-up period e.
2: θ(1), θ(2) = WarmUp(X, Y, θ(1)), WarmUp(X, Y, θ(2)) // standard training to warm up DNNs
3: while epoch < MaxEpoch do
4:   // get GMM cleaners by loss distribution modeling and calculate the clean/noise probability distribution
5:   Q(2)(Z′) = GMM(X, Y, θ(1))
6:   Q(1)(Z′) = GMM(X, Y, θ(2))
7:   // calculate the clean/noise probability distribution via CPC
8:   Q(2)(Z) = CPC(X, Y, θ(1), C(1))
9:   Q(1)(Z) = CPC(X, Y, θ(2), C(2))
10:  for r ∈ {1, 2} do
11:    // stage 1 begin
12:    X_GMM(r) = {(xi, yi, wi) | wi = q(r)(z′i = 0), q(r)(z′i = 0) > τ, (xi, yi) ∈ D, q(r)(z′i = 0) ∈ Q(r)(Z′ = 0)}
13:    U_GMM(r) = {xi | q(r)(z′i = 0) ≤ τ, xi ∈ X, q(r)(z′i = 0) ∈ Q(r)(Z′ = 0)}
14:    Get noise labels {yi | (xi, yi) ∈ D, xi ∈ U_GMM(r)}
15:    Update C(r) based on Eq. 5
16:    // stage 1 end
17:    // stage 2 begin
18:    if epoch < e then
19:      X(r) = X_GMM(r), U(r) = U_GMM(r) // use the data partition from the GMM cleaner to update the DNNs during the CPC warm-up period
20:    else
21:      X(r) = {(xi, yi, wi) | wi = q(r)(zi = 0), q(r)(zi = 0) > τ, (xi, yi) ∈ D, q(r)(zi = 0) ∈ Q(r)(Z = 0)}
22:      U(r) = {xi | q(r)(zi = 0) ≤ τ, xi ∈ X, q(r)(zi = 0) ∈ Q(r)(Z = 0)}
23:    end if
24:    Update θ(r) based on Eq. 11 as in standard DivideMix
25:    // stage 2 end
26:  end for
27:  epoch ← epoch + 1
28: end while
Output: DNNs θ(1), θ(2)

1. What is the main contribution of the paper regarding learning with noisy labels?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the theoretical analysis and experimental results?
5. Is there any confusion regarding the implementation of the method and its relationship with other approaches?
Summary Of The Paper
This paper presents a work on learning with noisy labels. The class-agnostic sample selection approach is improved through class prototypes. Based on a prior framework designed for handling noisy labels, extensive experiments demonstrate that the proposed method of this paper achieves competitive classification performance on multiple tasks.
Strengths And Weaknesses
Strengths
Experimental results are abundant, which shows that the proposed method is a state-of-the-art one.
Weaknesses
The contribution of this paper is overclaimed. Besides, the technical contribution is weak.
Theoretical analysis is not convincing and likely to be problematic.
Writing needs to be improved. It is hard to follow the implementation of the method. Also, lots of claims/explanations/descriptions are confusing.
Clarity, Quality, Novelty And Reproducibility
Clarity & Quality
There are many unclear explanations in this paper. The quality is not satisfactory. Specifically,
The paper overclaims its contribution. It claims that "this paper first reveal this long-ignored problem". This is not true. Prior work [R1] (see its Figure 1) has already discussed how data memorization during training differs across classes.
The class-agnostic issue is easy to understand. However, the Kolmogorov-Smirnov test instead confuses readers.
The paper argues that the GMM method for sample selection is class-agnostic and therefore improves it. However, this method is still involved in the proposed framework, which is a bit contradictory. Besides, if the errors brought by this GMM method accumulate, the proposed framework will be largely affected.
What is the relationship between the EM algorithm and the actual implementation? It seems that the paper uses Eq. (5) in optimization but does not use the EM algorithm. If so, why is the theoretical analysis conducted with EM? Besides, the theoretical analysis does not export a useful result. The paper only claims that the obtained loss values can be used effectively for sample selection, which is not rigorous for a theory. At least, the convergence of EM and the results obtained after convergence should be provided.
An algorithm flow could be provided to better follow the implementation of the framework.
One minor comment. Noisy labels are not incorrect labels. They consist of both clean and incorrect labels.
[R1] Yisen Wang et al. Symmetric Cross Entropy for Robust Learning with Noisy Labels. In ICCV, 2019.
Novelty
The novelty of this paper is limited. The motivation of this paper is good, targeting the class-agnostic issue of existing sample selection methods. However, the framework implementation is a bit rough. It seems that the paper simply combines unsupervised learning methods and semi-supervised learning methods for handling noisy labels to improve network robustness. Both the conceptual and technical novelty cannot meet the requirements of a top-tier conference.
Reproducibility
The reproducibility of this paper may not be satisfactory. Although the paper provides some implementation details of the proposed framework, too many hyper-parameters are included and need to be determined. In fact, the DivideMix framework already contains many hyper-parameters, and the proposed framework additionally introduces α, e, and τ. These hyper-parameters cannot be determined through standard machine learning paradigms such as (cross-)validation and grid search methods. Therefore, the reproducibility may not be good enough. One suggestion is that the paper could provide all needed hyper-parameters and carefully discuss how to tune them in practice, i.e., only with training and validation sets.
ICLR | Title
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Abstract
Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed via the pre-text task. However, the working principle behind MIM is not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned and how it is acquired via the MIM task. We observe that MIM essentially teaches the model to learn better middle-order interactions among patches and extract more generalized features. Based on this fact, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that our A2MIM learns better representations without explicit design and endows the backbone model with a stronger capability to transfer to various downstream tasks for both Transformers and CNNs.
1 INTRODUCTION
Supervised deep learning with large-scale annotated data has witnessed an explosion of success in computer vision (CV) (Krizhevsky et al., 2012a; He et al., 2016) and natural language processing (NLP) (Vaswani et al., 2017). However, a large number of high-quality annotations are not always available in real-world applications. Learning representations without supervision by leveraging pre-text tasks has become increasingly popular.
In CV, early self-supervised learning approaches (Zhang et al., 2016; Doersch et al., 2015; Gidaris et al., 2018) aim to capture invariant features through predicting transformations applied to the same image. However, these methods rely on vision ad-hoc heuristics, and the learned representations are less generic for downstream tasks. Recently, contrastive learning-based approaches (Tian et al., 2020; Chen et al., 2020b; He et al., 2020) have witnessed significant progress, even outperforming supervised methods on several downstream tasks (Chen et al., 2020c; Grill et al., 2020; Zbontar et al., 2021). More recently, inspired by masked autoencoding methods (Radford et al., 2018; Devlin et al., 2018) in NLP, Masked Image Modeling (MIM) methods (Bao et al., 2022; He et al., 2022; Wei et al., 2021; Xie et al., 2021b) have brought about new advances for self-supervised pre-training on CV tasks. The transition from human language understanding to NLP masked autoencoding is quite natural because the filling of missing words in a sentence requires relatively comprehensive semantic understanding. In analogy, humans can understand and imagine masked content by visually filling the missing structures in an image containing occluded parts.
Different from contrastive learning, which yields a clustering effect from pre-training by pulling similar samples together and pushing away dissimilar samples, MIM pre-training methods have not been extensively explored in terms of what knowledge is expected to be learned or how this knowledge is acquired. Existing works mainly focus on improving downstream task performance via explicit designs such as trying different prediction targets (Wei et al., 2021), adopting a pre-trained tokenizer (Zhou et al., 2021), utilizing a complex Transformer decoder (He et al., 2022), or combining with contrastive learning (El-Nouby et al., 2021). Moreover, the success of existing MIM methods is largely confined to Vision Transformer (ViT) structures (Dosovitskiy et al., 2021), since directly applying the mask token (Devlin et al., 2018) and positional embeddings to CNNs leads to inferior performance.
In this work, we carry out systematic experiments and show that MIM as a pre-training task essentially teaches the model to learn better middle-order interactions between patches for more generalized feature extraction regardless of the underlying network structure. Compared to the local texture features learned by low-order interactions between patches, more complex features such as shape and edge could be extracted via middle-order interactions among patches. The interaction of patches could be considered as information fusion via both the convolution operation of a CNN and the self-attention mechanism of a Transformer. That is to say, CNN and Transformer should both benefit from better middle-order interactions with MIM as the pre-text task.
To bridge the gap of MIM in terms of network architectures based on our extensive experimental analysis, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM) that focuses on enhancing the middle-order interaction capabilities of the network. Specifically, we mask the input image with the mean RGB value and place the mask token at intermediate feature maps of the network. In addition, we propose a loss in the Fourier domain to further enhance the middle-order interaction capability of the network. Our contributions are summarized as follows:
• We conducted systematic experiments and showed that the essence of MIM is to better learn middle-order interactions between patches, rather than to improve reconstruction quality.
• We proposed a novel MIM-based framework dubbed A2MIM that bridges the gap between CNNs and Transformers. We are also the first to perform MIM on CNNs without adopting designs native to ViTs, outperforming contrastive learning counterparts.
• Extensive experiments with both Transformers and CNNs on ImageNet-1K and public benchmarks for various downstream tasks show that our method achieves better pre-trained representation quality than state-of-the-art methods.
2 RELATED WORK
Contrastive Learning. Contrastive learning learns instance-level discriminative representations by extracting invariant features over distorted views of the same data. MoCo (He et al., 2020) and SimCLR (Chen et al., 2020b) adopted different mechanisms to introduce negative samples for contrast with the positive. BYOL (Grill et al., 2020) and its variants (Chen & He, 2020; Chen et al., 2021) further eliminate the requirement of negative samples to avoid representation collapse. Besides pairwise contrasting, SwAV (Caron et al., 2020) clusters the data while enforcing consistency between multi-augmented views of the same image. Barlow Twins (Zbontar et al., 2021) proposed to measure the cross-correlation matrix of distorted views of the same image to avoid representation collapsing. Meanwhile, some efforts have been made on top of contrastive methods to improve pre-training quality for specific downstream tasks (Xie et al., 2021a; Xiao et al., 2021; Selvaraju et al., 2021; Wu et al., 2022). MoCo.V3 (Chen et al., 2021) and DINO (Caron et al., 2021) adopted ViT (Dosovitskiy et al., 2021) in self-supervised pre-training to replace CNN backbones.
Autoregressive Modeling. Autoencoders (AE) are a typical type of network architecture that allows representation learning with no annotation requirement (Hinton & Zemel, 1993). By forcing a denoising property onto the learned representations, denoising autoencoders (Vincent et al., 2008; 2010) are a family of AEs that reconstruct the uncorrupted input signal from a corrupted version of the signal. Generalizing the notion of denoising autoregressive modeling, masked prediction attracted the attention of both the NLP and CV communities. BERT (Devlin et al., 2018) performs masked language modeling (MLM), where the task is to classify the randomly masked input tokens. Representations learned by BERT as pre-training generalize well to various downstream tasks. For CV, inpainting tasks (Pathak et al., 2016) to predict large missing regions using CNN encoders and colorization tasks (Zhang et al., 2016) to reconstruct the original color of images with removed color channels are proposed to learn representations without supervision. With the introduction of Vision Transformers (ViT) (Dosovitskiy et al., 2021; Liu et al., 2021), iGPT (Chen et al., 2020a) predicts succeeding pixels given a sequence of pixels as input. MAE (He et al., 2022) and BEiT (Bao et al., 2022) randomly mask out input image patches and reconstruct the missing patches with ViTs. Compared to MAE, MaskFeat (Wei et al., 2021) and SimMIM (Xie et al., 2021b) adopt linear layers as the decoder instead of another Transformer as in MAE. MaskFeat applies HOG as the prediction target instead of the RGB value. Other research endeavors (El-Nouby et al., 2021; Zhou et al., 2021; Assran et al., 2022; Akbari et al., 2021; Sameni et al., 2022) combine the idea of contrastive learning
(CL) with MIM. SplitMask (El-Nouby et al., 2021) proposed to use half of the image pixels to predict the other half while applying the InfoNCE loss (Van den Oord et al., 2018) across the corresponding latent features. MSN (Assran et al., 2022) matches the representation of an image view containing randomly masked patches with that of the original unmasked image. Similarly, iBOT (Zhou et al., 2021) adopts the Siamese framework to combine self-distillation with MIM. Moreover, Data2Vec (Baevski et al., 2022) proposed a framework that applies the masked prediction idea to speech, NLP, and CV alike. However, most MIM works are confined to ViT architectures; the recently proposed CIM (Fang et al., 2022) adopts the output of a pre-trained tokenizer as the target and takes the prediction of a frozen BEiT as input to the encoder as a workaround to enable MIM on CNNs. In this work, we propose A2MIM, which performs MIM on both ViTs and CNNs without adopting any components native to ViTs.
3 INTRIGUING PROPERTIES OF MASKED IMAGE MODELING
3.1 IS MIM BETTER IMAGE AUGMENTATION?
Compared to CNNs, Transformers gain tremendous performance improvement with carefully designed image augmentation techniques such as RandAug (Cubuk et al., 2020), CutMix (Yun et al., 2019) and random erasing (Zhong et al., 2020). Random erasing (Zhong et al., 2020) randomly removes part of the image and replaces it with Gaussian noise, while CutMix randomly removes part of the image and replaces the corresponding region with a patch from another image. Similarly, in most MIM pre-training tasks, some image patches are masked out and replaced with a learnable mask token. Noticing the resemblance of the masking operations, we hypothesize that MIM as a pre-training task and masking-based data augmentations both enhance the network's robustness towards occlusion, endowing the network with a more generalized feature extraction ability. To verify our hypothesis, we design an occlusion robustness test. Let $x \in \mathbb{R}^{3\times H\times W}$ be an input image and $y \in \mathbb{R}^{C}$ be its corresponding label, where $C$ is the number of classes. Consider a classification task $y = f(x)$, where $f$ denotes a neural network; the network is considered robust if it outputs the correct label given an occluded version of the image $x'$, namely $y = f(x')$. For occlusion, we consider the patch-based random masking adopted in most MIM works (He et al., 2022; Xie et al., 2021b; Wei et al., 2021). In particular, we split the image of size $224\times 224$ into patches of size $16\times 16$ and randomly mask $M$ patches out of the total number of $N$ patches. The occlusion ratio can then be defined as $M/N$. We conduct experiments on ImageNet-100 (IN-100) (Krizhevsky et al., 2012b) for both a Transformer and a CNN with different settings. We choose ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architectures. Robustness is compared under the following settings: (i) random weight initialization with no image augmentation applied; (ii) random weight initialization with different image augmentations applied; (iii) MIM pre-training as weight initialization with and without image augmentations applied. In Fig. 1, we report the average top-1 accuracy across five runs trained with different settings under various occlusion ratios. Fig. 1(a) and 1(b) show that both MIM and patch-removing-like augmentations significantly improve model occlusion robustness for both ViT-S and ResNet-50. Nevertheless, MIM yields more robust feature extraction than adopting augmentations. Although MIM and patch-removing-like augmentations share similar masking mechanisms, MIM explicitly forces the model to learn the interactions between patches in order to reconstruct missing patches, enabling more robust feature extraction. Comparing Fig. 1(a) and 1(b), the convex trend of accuracy from ViT-S indicates better robustness than the concave trend from ResNet-50. The self-attention mechanism of ViTs is able to model the interactions between patches with high degrees of freedom compared to CNNs constrained by convolution priors. We claim that
the success of MIM on ViTs can be seen as a resonance: better patch interactions are imposed by MIM and, at the same time, supported by the self-attention mechanism of ViTs.
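For concreteness, a minimal sketch of this occlusion robustness test is given below; it uses simple zero filling for occluded patches and assumes a standard classification model and data loader (names and defaults are illustrative):

```python
import torch

@torch.no_grad()
def occlusion_top1(model, loader, occlusion_ratio, patch=16, device='cuda'):
    """Occlusion robustness test: randomly mask roughly M of the
    N = (H/patch) * (W/patch) patches per image and measure top-1
    accuracy of a trained classifier under that occlusion ratio."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        B, _, H, W = x.shape
        # Bernoulli patch mask with the requested occlusion ratio M/N,
        # upsampled from the patch grid to pixel resolution.
        m = (torch.rand(B, 1, H // patch, W // patch, device=device)
             < occlusion_ratio)
        m = m.repeat_interleave(patch, 2).repeat_interleave(patch, 3).float()
        pred = model(x * (1 - m)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += B
    return correct / total
```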
3.2 MIDDLE-ORDER INTERACTIONS FOR GENERALIZED FEATURE EXTRACTION
Next, we show that MIM essentially enables better middle-order interactions between patches. Note that existing MIM works adopt a medium or high masking ratio (Xie et al., 2021b; He et al., 2022) (e.g., 60% or 70%, see Fig. 2) during pre-training, and in these settings the pairwise interactions between patches occur within a middle-sized context measured by the order $m$. Early inpainting work based on CNNs (Pathak et al., 2016) resembles MIM but attracted little attention due to its much inferior performance compared to contrastive learning methods. The inpainting task adopts the masking strategy illustrated in Fig. 1(c), which masks one large region instead of random small patches. Such a masking mechanism ignores patch interactions and focuses only on reconstruction, leading to poor learned representation quality. To investigate whether MIM makes the model more sensitive to patch interactions of particular orders, we resort to the tool of multi-order interactions introduced by (Deng et al., 2022; Zhang et al., 2020). Intuitively, $m$-th order interactions of patches refer to inference patterns (deep features) induced from $m$ patches of the original image in the input space. With a small value of $m$ (low-order interactions), the model simply learns local features such as texture. Formally, the multi-order interaction $I^{(m)}(i, j)$ measures the order of interactions between patches $i$ and $j$. We define $I^{(m)}(i, j)$ as the average interaction utility between patches $i$ and $j$ over all contexts consisting of $m$ patches, where $m$ indicates the order of contextual complexity of the interaction. Mathematically, given an input image $x$ with a set of $n$ patches $N = \{1, \ldots, n\}$ (e.g., an image with $n$ pixels), the multi-order interaction $I^{(m)}(i, j)$ is defined as:
$$I^{(m)}(i, j) = \mathbb{E}_{S\subseteq N\setminus\{i,j\},\,|S|=m}\big[\Delta f(i, j, S)\big], \quad (1)$$
where $\Delta f(i, j, S) = f(S\cup\{i, j\}) - f(S\cup\{i\}) - f(S\cup\{j\}) + f(S)$. Here, $f(S)$ indicates the output score when the patches in $N\setminus S$ are replaced with the baseline value (Ancona et al., 2019) while the patches in the context $S\subseteq N$ are kept unchanged. See Appendix B.2 for details. To measure the interaction complexity of the neural network, we measure the relative interaction strength $J^{(m)}$ of the encoded $m$-th order interaction as follows:
$$J^{(m)} = \frac{\mathbb{E}_{x\in\Omega}\,\mathbb{E}_{i,j}\,\big|I^{(m)}(i, j|x)\big|}{\mathbb{E}_{m'}\,\mathbb{E}_{x\in\Omega}\,\mathbb{E}_{i,j}\,\big|I^{(m')}(i, j|x)\big|}, \quad (2)$$
where $\Omega$ is the set of all samples and $0 \le m \le n-2$. $J^{(m)}$ is the average interaction strength over all possible pairs of patches of the input samples, normalized by the average value of all interaction strengths. $J^{(m)}$ thus indicates the distribution (the area under the curve sums to one) of the order of interactions of the network. In this work, we use $J^{(m)}$ as the metric to evaluate and analyze the interaction orders of the network with MIM pre-training. We conduct experiments on IN-100 with image size $224\times 224$ and use ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architectures. We consider a patch of size $16\times 16$ as an input patch. For the computation of $J^{(m)}$, we adopt the sampling solution following previous works (Deng et al., 2022; Zhang et al., 2020). As can be seen from Fig. 1(c), ViT-S with random weight initialization tends to learn simple interactions with few patches (e.g., fewer than $0.05n$ patches), while MIM pre-trained models show stronger interactions of relatively middle order (from $0.05n$ to $0.5n$). Similarly, as observed in Fig. 1(d), MIM pre-trained
ResNet-50 enhances the middle-order interactions from $0.1n$ to $0.55n$ compared to randomly initialized models. Stronger middle-order interactions form more complex features such as shape and edge, compared to the local texture features learned from low-order interactions (Naseer et al., 2021).
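For reference, a minimal sketch of the sampling-based estimate of $I^{(m)}(i, j)$ is shown below; the wrapper `score_fn`, which replaces out-of-context patches with the baseline value, is an assumption of this sketch rather than part of the original implementation:

```python
import torch

@torch.no_grad()
def interaction_strength(score_fn, x, i, j, m, n_patches, n_samples=100):
    """Monte Carlo estimate of the m-th order interaction I^(m)(i, j) in
    Eq. (1): the average of Delta f(i, j, S) over random contexts S of
    size m. `score_fn(x, keep)` is assumed to return the model score f(S)
    with all patches outside `keep` replaced by the baseline value."""
    others = [p for p in range(n_patches) if p not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        idx = torch.randperm(len(others))[:m].tolist()
        S = [others[k] for k in idx]
        total += (score_fn(x, S + [i, j]) - score_fn(x, S + [i])
                  - score_fn(x, S + [j]) + score_fn(x, S))
    return total / n_samples
```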
4 APPROACH
We propose a generic MIM framework following two design rules: (a) No complex or non-generic designs are adopted to ensure compatibility with all network architectures. (b) Better middleorder interactions between patches for more generalized feature extraction. Figure 3 highlights the difference between our proposed framework and existing MIM frameworks in terms of three key components: masking strategy, encoder/decoder architecture design and prediction targets.
4.1 ARCHITECTURE AGNOSTIC FRAMEWORK
Mask Where Middle-order Interactions Occur. Existing works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b; Wei et al., 2021) adopt the masking strategy where the input image is divided into non-overlapping patches, and a random subset of patches is masked. MAE utilizes a Transformer as the decoder and takes only the visible patches into the encoder. Masked tokens are appended to the decoder to reconstruct the masked patches. SimMIM (Xie et al., 2021b) and MaskFeat (Wei et al., 2021) utilize a fully connected layer as the decoder and feed the mask token into the encoder together with the visible patches. The mask token (Devlin et al., 2018) is a token-shared learnable parameter that indicates the presence of missing patches to be predicted. Despite different choices of decoder structures, the mask token is placed either at the input to the encoder or at the decoder. Mathematically, the masking process of MIM is defined as $x_{mask} = x \odot (1-M) + T \odot M$, where $M$ is the random occlusion mask and $T$ represents the learnable mask token. Such masking at the patch embedding layer aligns with the attention mechanism of Transformers, which is robust against occlusion. However, masking at the stem layer undermines the context extraction capability of CNNs, which rely on local inductive biases. Moreover, masking at the input stages of the network leads to low-order interactions. Thus, we propose to mask intermediate features, where the output feature contains both semantic and spatial information and the mask token can encode interactions with a medium number of tokens. More concretely, our masking operation is defined as $z^l_{mask} = z^l + T \odot D(M)$, where $z^l$ is the intermediate feature map of $x$ at layer $l$ in the Transformer encoder (or at stage $l$ in CNNs) and $D(\cdot)$ is the corresponding down-sampling function of the occlusion mask.
Filling Masked Tokens with RGB Mean. It is worth noting that existing works directly replace the occluded patches with the mask token in the input space or after the patch embedding (Bao et al., 2022; Xie et al., 2021b). In contrast, we use the average RGB value to fill the occluded patches as the input to the encoder, and add the mask token onto the intermediate feature maps of the encoder. The masking mechanism originates from NLP, where language is of high-level semantics and does not require low-level feature extraction as image processing does. Introducing a zero mask at the early stages of the network, where low-level feature extraction happens, is harmful to feature extraction. From the view of the Fourier domain, the RGB mean is the DC component of images. It not only brings about minimal local statistics variation caused by the masking operation but also forces the network to model the more informative medium frequencies instead of filling the masked patches with blurry color blocks of low frequencies. The proposed masking strategy is generic to both convolution and self-attention in that it accommodates low-level to semantic-level feature extraction.
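A minimal PyTorch sketch of the two masking components described above is given below; shapes and names are illustrative, and nearest-neighbor interpolation is used as one plausible choice of the down-sampling $D(\cdot)$:

```python
import torch
import torch.nn.functional as F

def a2mim_mask(x, mask, feat, mask_token):
    """A2MIM masking sketch (Sec. 4.1). Occluded input patches are filled
    with the per-image RGB mean (the DC component) instead of zeros or a
    token, and the learnable mask token is added onto an intermediate
    feature map. x: (B,3,H,W); mask: (B,1,H,W) binary; feat: (B,C,h,w)
    from layer/stage l; mask_token: (1,C,1,1) learnable parameter."""
    rgb_mean = x.mean(dim=(2, 3), keepdim=True)       # per-image mean RGB
    x_masked = x * (1 - mask) + rgb_mean * mask       # x_mask
    m_down = F.interpolate(mask, size=feat.shape[-2:], mode='nearest')
    feat_masked = feat + mask_token * m_down          # z^l_mask = z^l + T * D(M)
    return x_masked, feat_masked
```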
4.2 MIDDLE-ORDER INTERACTIONS FROM FOURIER PERSPECTIVE
Current works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b) adopt raw RGB values as the prediction target. However, raw pixels in the spatial domain are heavily redundant and often contain low-order statistics (Bao et al., 2022; Wei et al., 2021; Zhou et al., 2021). MaskFeat (Wei et al., 2021) adopts the Histogram of Oriented Gradients (HOG) as the prediction target, outperforming MAE and SimMIM. HOG is a discrete descriptor of medium- or high-frequency features, which captures shape patterns based on middle-order interactions. ViTs and CNNs have low-pass and high-pass filtering properties, respectively (Park & Kim, 2022; 2021). Each has certain frequency bands that it cannot model well, and both fail to model middle-order interactions well (detailed in Appendix B.3). The observation that the medium-frequency descriptor HOG improves middle-order interactions leads to the hypothesis that learning medium frequencies would help the model learn more middle-order interactions. Given an RGB image $x \in \mathbb{R}^{3\times H\times W}$, the discrete Fourier transform (DFT) of each channel is defined as:
$$\mathcal{F}(u, v) = \sum_{h=1}^{H} \sum_{w=1}^{W} x(h, w)\, e^{-2\pi j \left(\frac{uh}{H} + \frac{vw}{W}\right)}. \quad (3)$$
In addition to the common MIM loss in the spatial domain, $\mathcal{L}_{spa}$, we propose $\mathcal{L}_{freq}$ in the Fourier domain:
$$\mathcal{L}_{freq} = \sum_{c=1}^{3} \sum_{u=1}^{H} \sum_{v=1}^{W} \omega(u, v) \left\| \mathrm{DFT}\!\left(x^{pred}_{c} \odot M + \mathrm{de}(x^{pred}_{c}) \odot (1 - M)\right) - \mathrm{DFT}(x_{c}) \right\|, \quad (4)$$
where $x^{pred}$ is the predicted image, $\mathrm{de}(\cdot)$ is the detach-gradient (stop-gradient) operation, and $\omega(u, v)$ is the frequency weighting matrix. $\omega(u, v)$ enables both ViTs and CNNs to model features of medium frequencies rather than the local textures and noise corresponding to high frequencies. Inspired by the Focal Frequency loss (Jiang et al., 2021), we define the adaptive $\omega(u, v)$ as follows:
$$\omega(u, v) = \left\| \mathrm{DFT}\!\left(x^{pred}_{c} \odot M + \mathrm{de}(x^{pred}_{c}) \odot (1 - M)\right) - \mathrm{DFT}(x_{c}) \right\|^{\alpha}, \quad (5)$$
where $\alpha$ is a scaling factor, and we set $\alpha = 1$. Fig. B.3 verifies that Eq. (5) allows the model to learn previously ignored frequencies (mostly the medium-frequency components). Note that $\mathcal{L}_{freq}$ introduces negligible overhead by using Fast Fourier Transform (FFT) algorithms with $O(n \log n)$ complexity to compute the DFT. The overall loss function of A2MIM is then defined as:
$$\mathcal{L} = \mathcal{L}_{spa} + \lambda \mathcal{L}_{freq}, \quad (6)$$
where $\mathcal{L}_{spa} = \left\| x^{pred} - x \right\| \odot M$ is the spatial reconstruction loss evaluated on the masked patches and $\lambda$ is a loss weighting parameter. We set $\lambda$ to 0.5 by default.
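A sketch of the full objective with torch.fft is shown below; the l1 norm, the mean reduction over frequencies, and detaching the adaptive weight are our assumptions where Eqs. (4)-(6) leave the reductions implicit:

```python
import torch

def a2mim_loss(x_pred: torch.Tensor, x: torch.Tensor, mask: torch.Tensor,
               alpha: float = 1.0, lam: float = 0.5) -> torch.Tensor:
    """L = L_spa + lambda * L_freq for (B, 3, H, W) prediction and target."""
    # Spatial l1 reconstruction loss, evaluated on masked pixels only.
    num_masked = mask.sum().clamp(min=1.0) * x.shape[1]
    l_spa = ((x_pred - x).abs() * mask).sum() / num_masked

    # Blend prediction and its detached copy so that gradients of the
    # frequency term flow only through the masked regions, as in Eq. (4).
    blend = x_pred * mask + x_pred.detach() * (1.0 - mask)
    diff = torch.fft.fft2(blend, norm="ortho") - torch.fft.fft2(x, norm="ortho")
    err = diff.abs()                 # per-frequency magnitude of the residual
    w = err.detach() ** alpha        # adaptive weight omega(u, v), Eq. (5)
    l_freq = (w * err).mean()

    return l_spa + lam * l_freq
```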
5 EXPERIMENTS
5.1 PRE-TRAINING SETUP
We adopt ResNet-50 (He et al., 2016) and Vision Transformer (Dosovitskiy et al., 2021) (ViTS/16 and ViT-B/16) as the backbone. We pre-train on ImageNet-1K (IN-1K) training set with AdamW (Loshchilov & Hutter, 2019) optimizer with a basic learning rate of 1.5× 10−4 adjusted by
a cosine learning rate scheduler and a batch size of 2048. The input image size is 224× 224 with a patch size of 32× 32. We use a random masking ratio of 60%. By default, the learnable mask tokens are placed at stage-3 in ResNet-50 and layer-5/layer-8 in ViT-S/ViT-B, respectively. We adopt a linear prediction head as the decoder (Xie et al., 2021b). A2MIM+ indicates adopting HOG as supervision and using the MLP decoder with depth-wise (DW) convolution. Our experiments are implemented on OpenMixup (Li et al., 2022) by Pytorch and conducted on workstations with NVIDIA V100 GPUs. We report the average results of 3 trials for all experiments and use bold and underline to indicate the best and the second-best performance. See Appendix A for detailed pre-training settings.
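For reference, the key pre-training hyper-parameters above can be collected into a single config sketch; the dictionary layout is ours, and the values follow Sec. 5.1 and Tab. A1:

```python
pretrain_config = dict(
    backbone="ResNet-50",           # or "ViT-S/16", "ViT-B/16"
    dataset="ImageNet-1K",
    input_size=224,
    mask_patch_size=32,
    mask_ratio=0.60,
    mask_token_location="stage-3",  # layer-5 for ViT-S, layer-8 for ViT-B
    decoder="linear",               # A2MIM+ uses an MLP decoder w/ DW conv
    optimizer="AdamW",
    base_lr=1.5e-4,                 # scaled linearly with batch size / 256
    weight_decay=0.05,
    batch_size=2048,
    lr_schedule="cosine",
    warmup_epochs=10,
)
```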
5.2 IMAGE CLASSIFICATION ON IMAGENET-1K
Evaluation Protocols. We first evaluate the learned representations by end-to-end fine-tuning (FT) and linear probing (Lin.) protocols on IN-1K. For evaluation on CNN, we adopt the RSB A2/A3 (Wightman et al., 2021) training settings for fine-tuning ResNet-50, which employ the LAMB (You et al., 2020) optimizer with a cosine scheduler for 300/100 epochs. For the linear probing setting on ResNet-50, we freeze the backbone features and train a linear classifier with an initial learning rate of 30 and a batch size of 256, following MoCo (He et al., 2020). For evaluation on Transformer, we employ the fine-tuning protocol of MAE (He et al., 2022), which uses the DeiT (Touvron et al., 2021) augmentation setting and an AdamW optimizer for 100-epoch training, and adopt a layer-wise learning rate decay of 0.65 following (Bao et al., 2022). See Appendix A for detailed evaluation configurations.
ResNet-50. We compare the proposed A2MIM with classical self-supervised learning methods (Inpainting (Pathak et al., 2016), Relative-Loc (Doersch et al., 2015), and Rotation (Gidaris et al., 2018)), contrastive learning (CL), and MIM methods with 100/300 pre-training epochs. We modified MIM methods to run them on ResNet-50: the learnable mask token is employed to the encoder of BEiT (Bao et al., 2022), Data2Vec (Baevski et al., 2022), and SimMIM (Xie et al., 2021b) after the
Table 3: Performance of object detection and semantic segmentation tasks based on ResNet50 on COCO and ADE20K.
Method         | Epochs | COCO AP^box | COCO AP^mask | ADE-20K mIoU
PyTorch (Sup.) | 120    | 38.2        | 33.3         | 36.1
SimCLR         | 800    | 37.9        | 33.3         | 37.6
MoCoV2         | 400    | 39.2        | 34.3         | 37.5
BYOL           | 400    | 38.9        | 34.2         | 37.2
SwAV           | 800    | 38.4        | 33.8         | 37.3
SimSiam        | 400    | 39.2        | 34.4         | 37.2
Barlow Twins   | 800    | 39.2        | 34.3         | 37.3
SimMIM‡        | 300    | 39.1        | 34.2         | 37.4
CIM            | 300    | -           | -            | 38.0
A2MIM          | 300    | 39.8        | 34.9         | 38.3
Table 4: Performance of object detection and semantic segmentation tasks based on ViT-B on COCO and ADE-20K.
Method      | Supervision | Epochs | AP^box | AP^mask | mIoU
DeiT (Sup.) | Label       | 300    | 47.9   | 42.9    | 47.0
MoCoV3      | CL          | 300    | 47.9   | 42.7    | 47.3
DINO        | CL          | 400    | 46.8   | 41.5    | 47.2
BEiT        | DALL-E      | 300    | 43.1   | 38.2    | 47.1
iBOT        | Momentum    | 400    | 48.4   | 42.7    | 48.0
MAE         | RGB         | 1600   | 48.5   | 42.8    | 48.1
MaskFeat    | HOG         | 800    | 49.2   | 43.2    | 48.8
SimMIM      | RGB         | 800    | 48.9   | 43.0    | 48.4
CAE         | DALL-E      | 800    | 49.2   | 43.3    | 48.8
A2MIM       | RGB         | 800    | 49.4   | 43.5    | 49.0
stem (the output feature of 56×56 resolution); the encoder of MAE randomly selects 25% of the 56×56 output features of the stem as unmasked patches and takes the reorganized 28×28 patches as the input to the four stages. As shown in Tab. 1, our approach achieves competitive performance with state-of-the-art contrastive-based methods under 100-epoch RSB A3 fine-tuning. Note that MIM methods see fewer training samples per epoch than CL methods (40% vs. 200% of patches) and usually require longer pre-training. Based on the longer fine-tuning evaluation using RSB A2, our method (300-epoch) outperforms contrastive-based methods with even fewer training epochs. Meanwhile, A2MIM also improves the baseline SimMIM† (+0.8%) and the concurrent work CIM (+0.4%) in terms of RSB A3 fine-tuning for the longer pre-training. Besides, we also report the linear probing accuracy in the fast pre-training setting for reference, although our main focus is to learn representations with better fine-tuning performance. Although the linear probing performance of our method is lower than that of contrastive-based methods, it still improves the baseline by 0.6%.
ViT. We then evaluate A2MIM based on ViT-S/B in Tab. 2. We list the supervision target used by various pre-training methods in the second column of Tab. 2. DALL-E (Ramesh et al., 2021) and VQGAN (Esser et al., 2021) are pre-trained image tokenizers, while momentum refers to the momentum encoder. Our approach outperforms current state-of-the-art methods with complex supervision, e.g., SplitMask (MIM with CL combined), iBOT (complex teacher-student architecture), and CIM (pre-trained BEiT as supervision). Based on ViT-S/B, A2MIM improves the baseline SimMIM by 0.5%/0.4% with RGB as supervision and 0.7%/0.7% with the HOG feature as supervision.
5.3 TRANSFER LEARNING EXPERIMENTS
Object detection and segmentation on COCO. To verify transferring abilities, we benchmark CL and MIM methods on object detection and segmentation with COCO (Lin et al., 2014). For evaluation on CNN, we follow the setup in MoCo, which fine-tunes Mask R-CNN (He et al., 2017) with the ResNet-50-C4 backbone using the 2× schedule on COCO train2017 and evaluates on COCO val2017. Results in Tab. 3 indicate that our approach (300-epoch) outperforms contrastive-based methods with longer pre-training (+0.7% AP^box and +0.6% AP^mask). For evaluation on Transformer, we follow MAE and CAE, which efficiently fine-tune Mask R-CNN with the ViT-B backbone using the 1× schedule. In Tab. 4, our approach (800-epoch) is superior to popular contrastive-based and MIM methods, e.g., outperforming MAE (1600-epoch) by 0.9% AP^box and 0.8% AP^mask.
Semantic segmentation on ADE20K. We then evaluate the transferring performances on semantic segmentation with ADE20K (Zhou et al., 2019) by fine-tuning UperNet (Xiao et al., 2018). Based on ResNet-50, all CNN models are fine-tuned for 160K iterations with SGD following MoCo. Results in Tab. 3 show that our method outperforms CL methods by at least 0.9% mIoU and outperforms CIM (required extra pre-trained BEiT (Bao et al., 2022)) by 0.3% mIoU. Based on ViT-B, we fine-tune models for 80K iterations with AdamW following MAE. Tab. 4 shows that our approach consistently improves MIM methods (e.g., outperforms MAE and SimMIM by 0.9% and 0.6% mIoU).
5.4 ABLATION STUDY
We next verify the effectiveness of the proposed components. Ablation studies are conducted with ResNet-50 and ViTs on IN-100 and IN-1K using the fine-tuning protocol. Based on the modified baseline SimMIM (Lspa), we first compare different mask token mechanisms: Replacing denotes the original way in most MIM methods, and Addition denotes our proposed way that adds the mask token to intermediate feature maps of the backbone. As shown in Fig. 5, adding the mask token to the medium stages (stage-3) or layers (layer-5) yields the best performance. Replacing masked patches in input images with the RGB mean value slightly improves the baseline SimMIM, especially for ResNet-50 (88.19 vs. 87.75 on IN-100). Then, we verify the proposed Lfreq in Tab. 5. We find that simply using Lfreq without the adaptive re-weighting ω (Eqn. 5) brings limited improvement as a frequency constraint on Lspa, while employing ω further enhances performance by helping the model learn more informative frequency components. Additionally, we visualize reconstruction results in Fig. 4 to show the improvements brought by our proposed components (more results in Appendix B).
5.5 VERIFICATION OF A2MIM DESIGN RULES

Table 6: Analysis of the scalability of A2MIM for advanced components on IN-1K.
Module                               | ResNet-50 | ViT-B
Decoder: Linear                      | 78.8      | 82.4
Decoder: 2-layer MLP                 | 78.8      | 82.4
Decoder: 2-layer MLP (w/ DW)         | 78.9      | 82.5
Decoder: 2-layer Transformer         | 78.6      | 82.3
Decoder: 2-layer Transformer (w/ DW) | 78.8      | 82.6
Target: RGB                          | 78.8      | 82.4
Target: HOG Feature                  | 78.9      | 82.6
Target: DINO Feature                 | 78.9      | 82.7

We verify whether A2MIM meets the intended design rules using the same experiment settings as Sec. 5.4: (i) A2MIM is generic enough to incorporate advanced components proposed in previous works (e.g., complex decoders, advanced prediction targets). As for the decoder structure, we replace the original linear decoder with 2-layer MLP or Transformer decoders, but find limited improvements or degenerated performance (similar to SimMIM) in Tab. 6. Inspired by PVT.V2 (Wang et al., 2022), we introduce a depth-wise (DW) convolution layer (w/ DW) to the MLP decoder (adding a 5×5 DW layer in between) and the Transformer decoder (adding a 3×3 DW layer in each FFN (Wang et al., 2022)), which brings improvements compared to the linear decoder. As for the prediction target, we follow MaskFeat to change the RGB target to the HOG feature or the output feature of ViT-B/16 pre-trained 1600 epochs by DINO (Caron et al., 2021). Tab. 6 shows that using advanced targets significantly improves the performance of A2MIM for both ResNet-50 and ViT-B. Therefore, we can conclude that A2MIM is a generally applicable framework. (ii) A2MIM enhances occlusion robustness and middle-order interactions among patches, as shown by the experiments on ImageNet-1K in Fig. A3.
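As a concrete reading of the "w/ DW" decoder above, here is a minimal sketch of the 2-layer MLP decoder with a 5×5 depth-wise convolution in between; the hidden width and GELU activation are our assumptions:

```python
import torch.nn as nn

class MLPDecoderDW(nn.Module):
    """2-layer MLP decoder with a 5x5 depth-wise convolution in between."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.fc1 = nn.Conv2d(in_dim, hidden_dim, kernel_size=1)  # per-token linear
        self.dw = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=5,
                            padding=2, groups=hidden_dim)        # depth-wise 5x5
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden_dim, out_dim, kernel_size=1)

    def forward(self, z):  # z: (B, C, H', W') encoder features
        return self.fc2(self.act(self.dw(self.fc1(z))))
```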
6 CONCLUSION
In this paper, we delved deep into MIM and answered the question of what exactly is learned during MIM pre-training. We adopted multi-order interactions to study the interaction order among image patches. We discovered that MIM essentially teaches the network to learn middle-order interactions among image patches for more complex feature extraction, regardless of the network architecture. Based on our findings, we further proposed a general framework A2MIM that is compatible with both Transformers and CNNs for MIM tasks, aiming at enhancing patch interactions during self-supervised pre-training. Besides a different mask token mechanism, we proposed a loss in the Fourier domain to better learn middle-order interactions. Experimental results have shown that our proposed framework improves the representations learned for both CNNs and Transformers, yielding performance superior to state-of-the-art methods on various downstream tasks.
A DETAILS OF COMPARISON EXPERIMENTS
This section provides experimental details for Sec. 5, e.g., pre-training and evaluation on ImageNet-1K and transfer learning settings on downstream tasks.
A.1 IMAGENET-1K EXPERIMENTS
Pre-training. The default settings of A2MIM for ResNet-50 and ViTs are provided in Tab. A1, following SimMIM (Xie et al., 2021b). We use the AdamW (Loshchilov & Hutter, 2019) optimizer with the cosine scheduler and the linear learning rate scaling rule (Goyal et al., 2020): lr = base_lr × batch size / 256. Similar to current MIM methods, we only use RandomResizedCrop with a scale of (0.67, 1.0) and do not employ other complex augmentations (e.g., Rand Augment (Cubuk et al., 2020), mixups (Yun et al., 2019), or stochastic depth) during pre-training. As for ViTs, we adopt cosine decay for 100- and 300-epoch pre-training, while using step decay (the learning rate multiplied by 0.1 at epoch 700) for 800-epoch pre-training.
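The linear scaling rule amounts to a one-line computation; the example value matches the default A2MIM setup (base lr 1.5e-4, batch size 2048):

```python
def scaled_lr(base_lr: float, batch_size: int) -> float:
    """Linear learning rate scaling: lr = base_lr * batch_size / 256."""
    return base_lr * batch_size / 256

lr = scaled_lr(1.5e-4, 2048)  # = 1.2e-3 for the default pre-training setup
```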
End-to-end fine-tuning. Our fine-tuning settings follow common practices of supervised image classification on ImageNet-1K. As shown in Tab. A2, we fine-tune pre-trained ViTs for 100 epochs using the DeiT (Touvron et al., 2021) training recipe, which employs AdamW (Loshchilov & Hutter, 2019) optimizer with the cross-entropy (CE) loss; we fine-tune pre-trained ResNet-50 for 100/300 epochs using RSB A3/A2 (Wightman et al., 2021) settings, which employs LAMB (You et al., 2020) optimizer with the binary cross-entropy (BCE) loss. Additionally, we use layer-wise learning rate decay as (Bao et al., 2022) for fine-tuning ViT models.
Table A1: ImageNet-1K A2MIM pre-training settings for ResNet-50 and ViT models.
Configuration           | ResNet-50           | ViTs
Pre-training resolution | 224×224             | 224×224
Mask patch size         | 32×32               | 32×32
Mask ratio              | 60%                 | 60%
Optimizer               | AdamW               | AdamW
Base learning rate      | 1.5×10⁻⁴            | 1×10⁻⁴
Weight decay            | 0.05                | 0.05
Optimizer momentum      | β₁, β₂ = 0.9, 0.999 | β₁, β₂ = 0.9, 0.999
Batch size              | 2048                | 2048
Learning rate schedule  | Cosine              | Cosine / Step
Warmup epochs           | 10                  | 10
RandomResizedCrop       | ✓                   | ✓
Rand Augment            | ✗                   | ✗
Stochastic Depth        | ✗                   | ✗
Gradient Clipping       | ✗                   | max norm = 5
Table A2: ImageNet-1K fine-tuning recipes for ResNet-50 (RSB A2/A3) and ViTs (DeiT).
Configuration          | ViTs (DeiT) | ResNet-50 (RSB A2) | ResNet-50 (RSB A3)
FT epochs              | 100         | 300                | 100
Training resolution    | 224         | 224                | 160
Testing resolution     | 224         | 224                | 224
Testing crop ratio     | 0.875       | 0.95               | 0.95
Optimizer              | AdamW       | LAMB               | LAMB
Base learning rate     | 2.5×10⁻⁴    | 1.5×10⁻³           | 1×10⁻³
Weight decay           | 0.05        | 0.02               | 0.02
Batch size             | 1024        | 2048               | 2048
Learning rate schedule | Cosine      | Cosine             | Cosine
Warmup epochs          | 5           | 5                  | 5
Label smoothing        | 0.1         | ✗                  | ✗
Stochastic depth       | 0.1         | 0.05               | ✗
Gradient clipping      | 5.0         | ✗                  | ✗
Rand Augment           | (9, 0.5)    | (7, 0.5)           | (6, 0.5)
Mixup alpha            | 0.8         | 0.1                | 0.1
CutMix alpha           | 1.0         | 1.0                | 1.0
Loss function          | CE loss     | BCE loss           | BCE loss
A.2 OBJECT DETECTION AND SEGMENTATION ON COCO
We adopt the Mask R-CNN (He et al., 2017) framework to perform transfer learning to object detection and segmentation on COCO (Lin et al., 2014) in Detectron2¹. For evaluation on ResNet-50, we follow MoCo (He et al., 2020) and fine-tune Mask R-CNN with the pre-trained ResNet-50-C4 backbone using the 2× schedule (24 epochs). For evaluation of ViTs, we follow MAE (He et al., 2022), which employs the pre-trained ViT backbone and an FPN neck (Lin et al., 2017) in Mask R-CNN, and fine-tune the model using the 1× schedule (12 epochs). For a fair comparison, we follow (Bao et al., 2022; Xie et al., 2021b) to turn on the relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning, initialized as zero.
A.3 SEMANTIC SEGMENTATION ON ADE-20K
We adopt UperNet (Xiao et al., 2018) to perform transfer learning to semantic segmentation on ADE-20K and use the semantic segmentation implementation in MMSegmentation².
¹ https://github.com/facebookresearch/detectron2
² https://github.com/open-mmlab/mmsegmentation
[Figure A1 plots: (a)(b) top-1 accuracy (%) vs. occlusion ratio (%) under random PatchDrop for ViT-S and ResNet-50; (c)(d) interaction strength J(m) vs. order m/n for ViT-S and ResNet-50; curves compare BYOL, MoCoV3, MAE, and SimMIM pre-training with DeiT or vanilla fine-tuning.]
Figure A1: (a)(b): Robustness against different occlusion ratios of images (CL vs. MIM) is studied for both ViT-S and ResNet-50 on ImageNet-100. (c)(d): Distributions of the interaction strength J(m) (CL vs. MIM) are explored for both ViT-S and ResNet-50 on ImageNet-100. The label indicates the pre-training method + fine-tuning augmentation used; random stands for random weight initialization.
We initialize the UperNet using the pre-trained backbones (ResNet-50 or ViTs) on ImageNet-1K. For ViTs, we fine-tune end-to-end for 80K iterations by AdamW with a batch size of 16. We search for the optimal layer-wise decay in {0.8, 0.9} and the optimal learning rate in {1×10⁻⁴, 2×10⁻⁴, 3×10⁻⁴} for all competitors. Similar to the fine-tuning settings on COCO, we use the relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning as in (Bao et al., 2022; Xie et al., 2021b). For ResNet-50, we follow MoCo (He et al., 2020), i.e., all CNN models are fine-tuned for 160K iterations by SGD with a momentum of 0.9 and a batch size of 16.
B EMPIRICAL EXPERIMENTS
This section provides background information and experimental details for Sec. 3. We also provide additional results of occlusion robustness evaluation and multi-order interaction strength.
B.1 OCCLUSION ROBUSTNESS
In Sec. 3.1, we analyze the robustness against occlusion of fine-tuned models on ImageNet-100 (a subset of ImageNet-1K divided by Tian et al. (2020)) using the official implementation³ provided by Naseer et al. (2021). Both MIM and contrastive-based methods are pre-trained for 400 epochs on ImageNet-100 using their pre-training settings on ImageNet-1K. We adopt the DeiT fine-tuning recipe in Tab. A2 and use the same setting (100-epoch) for both ViT-S and ResNet-50. Note that we use the modified SimMIM for ResNet-50 (replacing masked patches in the input image with the RGB mean) in all experiments.
As shown in Fig. 1 and Fig. A1, we compared MIM pre-trained models, supervised methods with various augmentations, and contrastive learning pre-trained methods in terms of top-1 accuracy under various occlusion ratios. We find that MIM methods show better occlusion robustness on both Transformers and CNNs. In addition to Sec. 3.1, we also provide results of salient occlusion for ViT-S and ResNet-50 on ImageNet-100 in Fig. A2. Note that the occlusion ratio is the ratio of dropped to total patches, and we plot the mean accuracy across 3 runs. We can conclude that MIM pre-trained models have stronger robustness against random and salient occlusions than supervised and contrastive-based methods.
B.2 MULTI-ORDER INTERACTION
In Sec. 3.2, we interpret what is learned by MIM via multi-order interactions (Deng et al., 2022; Zhang et al., 2020). The interaction complexity can be represented by I(m)(i, j) (defined in Eqn. 1), which measures the average interaction utility between variables i, j on all contexts consisting of m variables. Notice that the order m reflects the contextual complexity of the interaction I(m)(i, j). For example, a low-order interaction (e.g., m = 0.05n) means a relatively simple collaboration between variables i, j, while a high-order interaction (e.g., m = 0.95n) corresponds to a complex collaboration. As figured out in the representation bottleneck (Deng et al., 2022), deep neural networks (DNNs) are more likely to encode both low-order interactions and high-order interactions, but often fail to learn middle-order interactions. We hypothesize that MIM helps models learn more middle-order
³ https://github.com/Muzammal-Naseer/Intriguing-Properties-of-Vision-Transformers
[Figure A2 plots: top-1 accuracy (%) vs. occlusion ratio (%) for (a) ViT-S random PatchDrop, (b) ViT-S salient PatchDrop, (c) ResNet-50 random PatchDrop, (d) ResNet-50 salient PatchDrop; curves compare BYOL, MoCoV3, MAE, SimMIM, and random initialization.]
Figure A2: Robustness against various random or salient occlusion ratios of images is studied in (a)(b) for ViT-S and (c)(d) for ResNet-50 using various experimental settings on ImageNet-100. The label indicates the pre-training method + fine-tuning setting used; random stands for random weight initialization.
[Figure A3 plots: (a)(b) top-1 accuracy (%) vs. occlusion ratio (%) under random PatchDrop for ViT-S and ResNet-50; (c)(d) interaction strength J(m) vs. order m/n for ViT-S and ResNet-50; curves compare MoCoV3, BYOL, A2MIM, SimMIM, and PyTorch (supervised).]
Figure A3: Verification of robustness and interaction of A2MIM with ViT-S and ResNet-50 on ImageNet-1K. (a)(b): Robustness against different occlusion ratios of images is studied for A2MIM and various methods. (c)(d): Distributions of the interaction strength J(m) are explored.
interactions since MIM has a natural advantage in cases where some parts of the image are masked out. In Fig. 1, we calculate the interaction strength J(m) (defined in Eqn. 2) for fine-tuned models on ImageNet-100 using the official implementation⁴ provided by Deng et al. (2022). Specifically, we use images of 224×224 resolution as input and calculate J(m) on 14×14 grids, i.e., n = 14×14. We set the model output as $f(x_S) = \log \frac{P(\hat{y}=y \mid x_S)}{1 - P(\hat{y}=y \mid x_S)}$ given the masked sample $x_S$, where $y$ denotes the ground-truth label and $P(\hat{y}=y \mid x_S)$ denotes the probability of classifying the masked sample $x_S$ into the true category.
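A sketch of this score function is given below; `model` is any classifier returning logits, and clamping the probability away from 0 and 1 is our numerical-stability assumption:

```python
import torch
import torch.nn.functional as F

def model_score(model, x_S: torch.Tensor, y: int) -> float:
    """f(x_S) = log[P(y|x_S) / (1 - P(y|x_S))] for a masked sample x_S."""
    with torch.no_grad():
        probs = F.softmax(model(x_S.unsqueeze(0)), dim=1)[0]
    p = probs[y].clamp(1e-6, 1.0 - 1e-6)  # avoid log(0) at either extreme
    return torch.log(p / (1.0 - p)).item()
```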
B.3 MIM FROM FREQUENCY PERSPECTIVE
We first plot the log magnitude of Fourier-transformed feature maps of ResNet-50 with different pre-training methods using the tools⁵ provided by Park & Kim (2022) on ImageNet-1K. Following (Park & Kim, 2022), we first convert feature maps into the frequency domain and represent them on the normalized frequency domain (the highest frequency components are at {−π, +π}). In Fig. A4(a), we report the amplitude ratio of high-frequency components using the ∆ log amplitude. As shown in Fig. A4(a), inpainting and MIM show similar low-pass filtering effects at convolution layers compared to contrastive learning. This indicates that inpainting and MIM reduce noise and uncertainty induced by high-frequency features. We argue that the reconstruction performance of MIM is mainly related to low- or high-order interactions of patches (Deng et al., 2022), while reconstruction performance is not directly related to the learned representation quality. Then, we provide the standard deviation of feature maps by block depth as in (Park & Kim, 2022; 2021), which first calculates the feature map variance on the last two dimensions and then averages over the channel dimension for the whole dataset. Fig. A4(b) shows the feature variance of each layer of ResNet-50 with different pre-training methods on IN-1K. This figure indicates that MIM tends to reduce the feature map variance; conversely, supervised training, inpainting, and contrastive learning based on CNN tend to increase the variance. Compared to MIM, which learns better middle-order interactions, the inpainting task fails to filter out low-order interactions and thus leads to higher variance. To conclude, MIM methods learn middle-order interactions and reduce feature map uncertainty (high frequencies) on the CNN encoder for generalized and stabilized feature extraction.
⁴ https://github.com/Nebularaid2000/bottleneck
⁵ https://github.com/xxxnell/how-do-vits-work
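The two diagnostics can be sketched as follows; the exact high-frequency band and normalization used by the referenced tool may differ, so treat this as an approximation under our assumptions:

```python
import torch

def high_freq_log_amplitude(feat: torch.Tensor) -> float:
    """Relative log amplitude of the highest- vs. lowest-frequency components."""
    f = torch.fft.fftshift(torch.fft.fft2(feat, norm="ortho"), dim=(-2, -1))
    amp = f.abs().mean(dim=(0, 1))              # average spectrum over batch, channels
    center = amp[amp.shape[0] // 2, amp.shape[1] // 2]  # lowest frequency (DC)
    corner = amp[0, 0]                          # near the highest frequency {-pi, -pi}
    return (torch.log(corner) - torch.log(center)).item()

def feature_map_variance(feat: torch.Tensor) -> float:
    """Spatial variance of a (B, C, H, W) feature map, averaged over channels."""
    return feat.var(dim=(-2, -1)).mean().item()
```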
[Figure A4 plots: (a) relative log amplitude vs. normalized network depth; (b) feature map variance vs. normalized network depth; curves compare BYOL, MoCoV3, Inpainting, SimMIM, DeiT (Sup.), and random initialization.]
Figure A4: (a) Fourier-transformed feature maps. The vertical axis is the relative log amplitude of the high-frequency components, and the horizontal axis is the normalized depth of the network. The blue columns indicate the pooling layers, while the white columns indicate the convolution layers. (b) Feature map variance. The vertical axis is the average variance value of feature maps. DeiT (Sup.) is supervised pre-training. The results of the randomly initialized network are plotted for reference.
[Figure A5 panels: raw image, predicted image, their Fourier spectra, and the Lfreq loss weight (Fourier spectrum), with and without ω, for a fox sample.]
Figure A5: Visualization of predicted images and the Lfreq loss weight in the Fourier domain. From the view of the Fourier spectrum, the raw image (left) contains 99% low-frequency components (usually presenting contents) and rich medium-frequency (structural patterns) and high-frequency components (local details and noises), while the predicted result (middle) provides fewer medium- or high-frequency components. Calculated in the Fourier domain, the loss weights (right) of Lfreq w/o ω help the model to learn the full spectrum, while Lfreq focuses on the low- and medium-frequency parts, which are more likely to be low-order or middle-order interactions.
C MORE EXPERIMENT RESULTS
C.1 ABLATION OF THE PROPOSED MODULES
In addition to the ablation studies in Sec. 5.4, we provide an ablation study of the proposed Lfreq in the Fourier domain, as shown in Figure A5. As discussed in Sec. 4, we hypothesize that learning medium frequencies helps the model learn more middle-order interactions. We thereby propose Lfreq to tackle the dilemma of Lspa, which tends to learn low-frequency components (i.e., contents reflected by high-order interactions). Although the reconstruction loss in the Fourier domain has a global perception, the high-frequency components are usually constructed from local details and noises (i.e., low-order interactions), which might hurt generalization abilities. Therefore, we introduce the re-weighting ω(u, v) to force the model to learn more medium-frequency components, which correspond to middle-order interactions. Then, we perform a further analysis of the masked patch size for A2MIM in Tab. A3. Note that we pre-train ResNet-50 for 100 epochs and ViT-B for 400 epochs on ImageNet-1K and report the fine-tuning results. As shown in Tab. A3, when the mask ratio is 60%, the optimal masked patch size is 32×32 for A2MIM, which is the same as for SimMIM.
Table A3: Ablation of masked patch size for A2MIM based on ResNet-50 and ViT-B on ImageNet-1K.
Model     | Masked patch size | Mask ratio | PT epochs | Top-1 Accuracy (%)
ResNet-50 | 8 / 16 / 32 / 64  | 0.6        | 100       | 78.2 / 78.6 / 78.8 / 78.7
ViT-B     | 8 / 16 / 32 / 64  | 0.6        | 400       | 82.9 / 83.4 / 83.5 / 83.3
C.2 ANALYSIS OCCLUSION ROBUSTNESS AND INTERACTION OF A2MIM
We further analyze the occlusion robustness and interaction strength of A2MIM with ViT-S (pre-trained 400 epochs) and ResNet-50 (pre-trained 100 epochs) on ImageNet-1K, as shown in Fig. A3. Fig. A3(a) and A3(b) show that A2MIM is more robust to occlusion than the baseline SimMIM and contrastive learning methods with both Transformers and CNNs. Meanwhile, we find that MIM methods learn more balanced interaction strengths than both supervised and contrastive learning methods in Fig. A3(c) and A3(d). A2MIM further improves SimMIM by capturing more middle-order interactions (0.2n to 0.6n) with Transformers and CNNs. Therefore, we can conclude that A2MIM helps the model learn better middle-order interactions between patches for a more generalized visual representation.
C.3 SCALING-UP A2MIM
Additionally, we scale up the backbone encoders to verify the performance of A2MIM with ResNet and ViT on ImageNet-1K. As shown in Table A4, our proposed A2MIM and its advanced variant A2MIM+ consistently improve both contrastive-based and MIM methods on architectures of all scales, e.g., A2MIM outperforms SimMIM by 0.5%/0.5%/0.5%/0.2% and 0.6%/0.4% based on ViT-S/B/L/H and ResNet-50/101, demonstrating that A2MIM is an architecture-agnostic and scalable framework for MIM pre-training.
Table A4: ImageNet-1K fine-tuning (FT) top-1 accuracy (%) with ResNet (R) and ViT of various model scales. We adopt the 100-epoch fine-tuning protocols for both architectures.
Methods  | Supervision | ViT-S | ViT-B | ViT-L | ViT-H | R-50 | R-101
Sup.     | Label       | 79.9  | 81.8  | 82.6  | 83.1  | 78.1 | 79.8
MoCoV3   | CL          | 81.4  | 83.2  | 84.1  | -     | 78.7 | -
DINO     | CL          | 81.5  | 83.6  | -     | -     | 78.7 | -
MAE      | RGB         | -     | 83.6  | 85.9  | 86.9  | 77.1 | -
SimMIM   | RGB         | 81.7  | 83.8  | 85.6  | 86.8  | 78.2 | 80.0
MaskFeat | HOG         | -     | 84.0  | 85.7  | -     | 78.4 | -
A2MIM    | RGB         | 82.2  | 84.2  | 86.1  | 87.0  | 78.8 | 80.4
A2MIM+   | HOG         | 82.4  | 84.5  | 86.3  | 87.1  | 78.9 | 80.5
D VISUALIZATION EXPERIMENTAL DETAILS
In addition to visualization results in Sec. 5.4, we visualize more reconstruction results of A2MIM here. Similar to Fig. 4, we ablate the proposed components in A2MIM based on ResNet-50 in Fig. A6, which demonstrates that A2MIM helps ResNet-50 learn more spatial details, i.e., more middle-order interactions. Moreover, we study the effects of the mask token in both ViTs and CNNs in Fig. A7.
[Figure A6 image panels: raw, masked, and predicted images for fox, cucumber, and balloon samples, comparing the zero mask and the RGB mean mask.]
Figure A6: Visualizations of predicted results from SimMIM (middle) and our A2MIM (right) based on ResNet-50 pre-trained 100 epochs on ImageNet-1K. Notice that T(s∗) denotes adding the mask token T to the optimal stage s∗ in ResNet-50. We ablate the proposed components by adding them to the baseline SimMIM: replacing the zero mask with the RGB mean mask (the modified SimMIM baseline) and adding the mask token T(s∗) relieve grid-like artifacts in the predicted results; adding the proposed Lfreq helps the model capture more informative details.
[Figure A7 image panels: raw, masked, and predicted images (goldfish, balloon) for ViT-B and ResNet-50, with and without the learned mask token.]
Figure A7: Visualizations of predicted results with and without the mask token on ImageNet-1K. Notice that mask tokens are adopted in the pre-trained models based on ViT-S (300-epoch) or ResNet-50 (100-epoch). Based on ViT-S, removing the mask token corrupts both the contents of masked patches and the overall colors in SimMIM, while only corrupting the masked contents in A2MIM. Based on ResNet-50, removing the mask token slightly affects spatial details in the masked patches and causes grid-like artifacts in the unmasked patches. The different effects of the mask token in ViT-S and ResNet-50 might be because the two architectures use different spatial-mixing operators and normalization layers. As for ViTs, the self-attention operation captures informative details from unmasked patches, but the non-overlapping patch embedding and layer normalization keep each patch isolated. The mask token learns the mean templates (contents) of masked patches and gathers spatial details from unmasked patches via the self-attention operation. As for CNNs, each patch shares the same contents extracted by batch normalization layers, and the convolution operation extracts features from unmasked and masked patches equally. The mask token learns more high-frequency and informative details.

1. What is the focus and contribution of the paper on computer vision and self-supervised learning?
2. What are the strengths of the proposed approach, particularly in its ability to generalize to different neural network architectures and its insight into middle-order interactions?
3. What are the weaknesses of the paper, such as some parts being not very easy for the reader to follow, and how could they be improved?
4. Are there any open questions or suggestions for future research related to the methodological improvements proposed in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, including its discussion of related works and potential limitations?

Summary Of The Paper
This work proposes a variant of BERT-like pretraining method for computer vision (the so-called Masked Image Modeling). The method can generalize well to both vision transformers and CNNs, thus is described as architecture agnostic.
The main contributions of this work are:
using the mean color values to fill the corrupted pixels, which makes the self-supervised pipeline suitable for different neural network architectures;
giving a nice insight that the middle-order interactions are important for visual representation, and making two improvements to standard BERT algorithm to facilitate their learning (i. masking intermediate features and ii. introducing supervision in the frequency domain);
and providing some empirical evidence that supports its validity.
Strengths And Weaknesses
[strengths]
[S1] This work presents a fresh perspective to rethink the self-supervised learning in computer vision, say, the middle-order interaction. The "interaction strength" mentioned in the paper is a useful indicator to show how good the model is at modeling long-range dependency, which is actually considered to be a valuable aspect in advancing pre-training for NLP [1,2].
[S2] The authors have put considerable effort into how to fairly compare the established algorithms, which is appreciated and allows the hypotheses presented in their paper to be fully tested. The ablation experiments also demonstrate the validity of different components in their method.
[weakness]
There are some parts of the article that are not very easy for the reader to follow, which could be the main weakness. I would suggest authors to explain more about the following aspects (which I find enlightening) and eventually add them to their manuscript or appendix:
[W1] In the Introduction, the authors claim that "it is not straightforward to directly apply mask token to CNNs". Can we replace the masked part (e.g., 16x16x3=768 pixels) with a "mask token" (e.g., 768 learnable scalars) like BERT does? This is a straightforward way to mask. Or would it be more appropriate to describe this way as "underperforming" than "not straightforward"? (considering the mediocre performance of MAE, SimMIM, etc. in Tab. 1 of the paper)
[W2] "MAE takes the reorganized the unmasked input patches of 112x112 as the input", can the authors explain more on this? I wonder if a substitution like the above (768 pixels to a 768-dimensional learnable vector) or some other operation has been performed.
[W3] According to the analysis in Sec. 3.2, middle-order interactions seem to manifest a medium- or long-range inter-patch dependency. Why do the authors say "middle-order interactions could be enhanced via guiding the network to learn features of certain frequencies"? Does "certain frequencies" refer to medium or high frequencies? Why does increasing the richness of these frequencies in feature maps promote middle-order interactions?
[W4] In addition, there appears to be some related work that has not been adequately discussed. See the "Clarity, Quality, Novelty And Reproducibility".
[open questions]
[O1] The methodological improvements to BERT-like pre-training in this paper seem to be incremental. Considering that the authors propose some quantitative metrics to describe middle-order interactions, is it possible to design a more principled algorithm to explicitly facilitate such middle-order interactions? I believe the insights on middle-order interactions are valuable, but the solutions proposed in the article do not seem to fully exploit their value.
[1] Jawahar, Ganesh, Benoît Sagot, and Djamé Seddah. "What Does BERT Learn about the Structure of Language?." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019.
[2] Xu, Jiacheng, et al. "Discourse-Aware Neural Extractive Text Summarization." Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020.
Clarity, Quality, Novelty And Reproducibility
[missing discussions that may weaken the novelty]
In the Related Work section, the authors refer to the work CIM [3]. This is an Electra-like [4] method that fills the corrupted pixels via a small network instead of mean color values, and thus [3] claimed to be the first to demonstrate that both ViT and CNN can learn rich visual representations using a unified, non-Siamese framework. Authors are encouraged to discuss A2MIM and CIM in more detail, and to update some of the corresponding descriptions. Although this would weaken the novelty of A2MIM, the related works deserve to be discussed fairly and pertinently.
[Reproducibility]
I believe one can easily reproduce the main results in this work given the detailed experimental configurations and source codes.
[3] Fang, Yuxin, et al. "Corrupted image modeling for self-supervised visual pre-training." arXiv preprint arXiv:2202.03382 (2022).
[4] Clark, Kevin, et al. "Electra: Pre-training text encoders as discriminators rather than generators." arXiv preprint arXiv:2003.10555 (2020). |
ICLR | Title
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Abstract
Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed via the pre-text task. However, the working principle behind MIM is not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned and how it is acquired via the MIM task. We observe that MIM essentially teaches the model to learn better middle-order interactions among patches and extract more generalized features. Based on this fact, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that our A2MIM learns better representations without explicit design and endows the backbone model with the stronger capability to transfer to various downstream tasks for both Transformers and CNNs.
1 INTRODUCTION
Supervised deep learning with large-scale annotated data has witnessed an explosion of success in computer vision (CV) (Krizhevsky et al., 2012a; He et al., 2016) and natural language processing (NLP) (Vaswani et al., 2017). However, a large number of high-quality annotations are not always available in real-world applications. Learning representations without supervision by leveraging pre-text tasks has become increasingly popular.
In CV, early self-supervised learning approaches (Zhang et al., 2016; Doersch et al., 2015; Gidaris et al., 2018) aim to capture invariant features through predicting transformations applied to the same image. However, these methods rely on vision ad-hoc heuristics, and the learned representations are less generic for downstream tasks. Recently, contrastive learning-based approaches (Tian et al., 2020; Chen et al., 2020b; He et al., 2020) have witnessed significant progress, even outperforming supervised methods on several downstream tasks (Chen et al., 2020c; Grill et al., 2020; Zbontar et al., 2021). More recently, inspired by masked autoencoding methods (Radford et al., 2018; Devlin et al., 2018) in NLP, Masked Image Modeling (MIM) methods (Bao et al., 2022; He et al., 2022; Wei et al., 2021; Xie et al., 2021b) have brought about new advances for self-supervised pre-training on CV tasks. The transition from human language understanding to NLP masked autoencoding is quite natural because the filling of missing words in a sentence requires relatively comprehensive semantic understanding. In analogy, humans can understand and imagine masked content by visually filling the missing structures in an image containing occluded parts.
Different from contrastive learning, which yields a clustering effect from pre-training by pulling similar samples and pushing away dissimilar samples, MIM pre-training methods have not been extensively explored in terms of what knowledge is expected to be learned or how it is acquired. Existing works mainly focus on improving downstream task performance via explicit designs, such as trying different prediction targets (Wei et al., 2021), adopting pre-trained tokenizers (Zhou et al., 2021), utilizing complex Transformer decoders (He et al., 2022), or combining MIM with contrastive learning (El-Nouby et al., 2021). Moreover, the success of existing MIM methods is largely confined to Vision Transformer (ViT) structures (Dosovitskiy et al., 2021), since directly applying the mask token (Devlin et al., 2018) and positional embedding to CNNs leads to inferior performance.
In this work, we carry out systematic experiments and show that MIM as a pre-training task essentially teaches the model to learn better middle-order interactions between patches for more generalized feature extraction regardless of the underlying network structure. Compared to the local texture features learned by low-order interactions between patches, more complex features such as shape and edge could be extracted via middle-order interactions among patches. The interaction of patches could be considered as information fusion via both the convolution operation of a CNN and the self-attention mechanism of a Transformer. That is to say, CNN and Transformer should both benefit from better middle-order interactions with MIM as the pre-text task.
To bridge the gap of MIM in terms of network architectures based on our extensive experimental analysis, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM) that focuses on enhancing the middle-order interaction capabilities of the network. Specifically, we mask the input image with the mean RGB value and place the mask token at intermediate feature maps of the network. In addition, we propose a loss in the Fourier domain to further enhance the middle-order interaction capability of the network. Our contributions are summarized as follows:
• We conducted systematic experiments and showed the essence of MIM is to better learn middle-order interactions between patches but not reconstruction quality.
• We proposed a novel MIM-based framework dubbed A2MIM that bridges the gap between CNNs and Transformers. We are also the first to perform MIM on CNNs without adopting designs native to ViTs that outperforms contrastive learning counterparts.
• Extensive experiments with both Transformers and CNNs on ImageNet-1K and public benchmarks for various downstream tasks show that our method achieves performance improvement on pre-trained representation quality than state-of-the-art methods.
2 RELATED WORK
Contrastive Learning. Contrastive learning learns instance-level discriminative representations by extracting invariant features over distorted views of the same data. MoCo (He et al., 2020) and SimCLR (Chen et al., 2020b) adopted different mechanisms to introduce negative samples for contrast with the positive. BYOL (Grill et al., 2020) and its variants (Chen & He, 2020; Chen et al., 2021) further eliminate the requirement of negative samples to avoid representation collapse. Besides pairwise contrasting, SwAV (Caron et al., 2020) clusters the data while enforcing consistency between multi-augmented views of the same image. Barlow Twins (Zbontar et al., 2021) proposed to measure the cross-correlation matrix of distorted views of the same image to avoid representation collapsing. Meanwhile, some efforts have been made on top of contrastive methods to improve pre-training quality for specific downstream tasks (Xie et al., 2021a; Xiao et al., 2021; Selvaraju et al., 2021; Wu et al., 2022). MoCo.V3 (Chen et al., 2021) and DINO (Caron et al., 2021) adopted ViT (Dosovitskiy et al., 2021) in self-supervised pre-training to replace CNN backbones.
Autoregressive Modeling. Autoencoders (AE) are a typical type of network architecture that allows representation learning with no annotation requirement (Hinton & Zemel, 1993). By forcing a denoising property onto the learned representations, denoising autoencoders (Vincent et al., 2008; 2010) are a family of AEs that reconstruct the uncorrupted input signal from a corrupted version of the signal as input. Generalizing the notion of denoising autoregressive modeling, masked prediction attracted the attention of both the NLP and CV communities. BERT (Devlin et al., 2018) performs masked language modeling (MLM), where the task is to classify the randomly masked input tokens. Representations learned by BERT as pre-training generalize well to various downstream tasks. For CV, inpainting tasks (Pathak et al., 2016) to predict large missing regions using CNN encoders and colorization tasks (Zhang et al., 2016) to reconstruct the original color of images with removed color channels were proposed to learn representations without supervision. With the introduction of Vision Transformers (ViT) (Dosovitskiy et al., 2021; Liu et al., 2021), iGPT (Chen et al., 2020a) predicts succeeding pixels given a sequence of pixels as input. MAE (He et al., 2022) and BEiT (Bao et al., 2022) randomly mask out input image patches and reconstruct the missing patches with ViTs. Compared to MAE, MaskFeat (Wei et al., 2021) and SimMIM (Xie et al., 2021b) adopt linear layers as the decoder instead of another Transformer as in MAE. MaskFeat applied HOG as the prediction target instead of the RGB value. Other research endeavors (El-Nouby et al., 2021; Zhou et al., 2021; Assran et al., 2022; Akbari et al., 2021; Sameni et al., 2022) combine the idea of contrastive learning
(CL) with MIM. SplitMask (El-Nouby et al., 2021) proposed to use half of the image pixels to predict the other half while applying the InfoNCE loss (Van den Oord et al., 2018) across the corresponding latent features. MSN (Assran et al., 2022) matches the representation of an image view containing randomly masked patches and the original unmasked image. Similarly, iBOT (Zhou et al., 2021) adopts the Siamese framework to combine self-distillation with MIM. Moreover, Data2Vec (Baevski et al., 2022) proposed a framework that applies the masked prediction idea to either speech, NLP, or CV. However, most MIM works are confined to ViT architectures; the recently proposed CIM (Fang et al., 2022) adopts the output of a pre-trained tokenizer as the target and takes the prediction of a frozen BEiT as the input to the encoder, as a workaround to enable MIM on CNNs. In this work, we propose A2MIM, which adopts no components native to ViTs, to perform MIM with both ViTs and CNNs.
3 INTRIGUING PROPERTIES OF MASKED IMAGE MODELING
3.1 IS MIM BETTER IMAGE AUGMENTATION?
Compared to CNN, Transformers gain tremendous performance improvement with carefully designed image augmentation techniques such as RandAug (Cubuk et al., 2020), CutMix (Yun et al., 2019), and random erasing (Zhong et al., 2020). Random erasing (Zhong et al., 2020) randomly removes part of the image and replaces it with Gaussian noise, while CutMix randomly removes part of the image and replaces the corresponding region with a patch from another image. Similarly, as in most MIM pre-training tasks, some image patches are masked out and replaced with a learnable mask token. Noticing the resemblance of the masking operations, we hypothesize that MIM as a pre-training task and masking-based data augmentations enhance the network's robustness towards occlusion, enabling the network with a more generalized feature extraction ability. To verify our hypothesis, we design an occlusion robustness test. Let x ∈ R^{3×H×W} be an input image and y ∈ R^C be its corresponding label, where C is the class number. Consider a classification task y = f(x), where f denotes a neural network; the network is considered robust if it outputs the correct label given an occluded version of the image x′, namely y = f(x′). For occlusion, we consider the patch-based random masking adopted in most MIM works (He et al., 2022; Xie et al., 2021b; Wei et al., 2021). In particular, we split the image of size 224×224 into patches of size 16×16 and randomly mask M patches out of the total number of N patches. The occlusion ratio can then be defined as M/N. We conduct experiments on ImageNet-100 (IN-100) (Krizhevsky et al., 2012b) for both Transformer and CNN with different settings. We choose ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architectures. Robustness is compared under the following settings: (i) random weight initialization with no image augmentation applied; (ii) random weight initialization with different image augmentations applied; (iii) MIM pre-training as weight initialization with and without image augmentations applied. In Fig. 1, we report the average top-1 accuracy across five runs trained with different settings under various occlusion ratios. Fig. 1(a) and 1(b) show that both MIM and patch-removing-like augmentations significantly improve model occlusion robustness for both ViT-S and ResNet-50. Nevertheless, MIM yields more robust feature extraction than adopting augmentations. Although MIM and patch-removing-like augmentations share similar masking mechanisms, MIM explicitly forces the model to learn the interactions between patches in order to reconstruct missing patches, enabling more robust feature extraction. Comparing Fig. 1(a) and 1(b), the convex trend of accuracy from ViT-S indicates better robustness than the concave trend from ResNet-50. The self-attention mechanism of ViTs is able to model the interactions between patches with high degrees of freedom compared to CNNs constrained by convolution priors. We claim that
the success of MIM on ViTs can be seen as resonance in terms of better patch interactions imposed by MIM while supported by the self-attention mechanism of ViTs.
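A minimal sketch of this occlusion test is shown below; filling dropped patches with zeros at evaluation time is our assumption:

```python
import torch

def occlusion_accuracy(model, images, labels, occlusion_ratio, patch=16):
    """Top-1 accuracy after randomly dropping a fraction of 16x16 patches."""
    B, _, H, W = images.shape
    n = (H // patch) * (W // patch)      # N patches per image
    m = int(occlusion_ratio * n)         # M patches to drop
    x = images.clone()
    for b in range(B):
        for idx in torch.randperm(n)[:m].tolist():   # random subset of patches
            r, c = divmod(idx, W // patch)
            x[b, :, r*patch:(r+1)*patch, c*patch:(c+1)*patch] = 0.0
    with torch.no_grad():
        pred = model(x).argmax(dim=1)
    return (pred == labels).float().mean().item()
```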
3.2 MIDDLE-ORDER INTERACTIONS FOR GENERALIZED FEATURE EXTRACTION
Next, we show that MIM essentially enables better middle-order interactions between patches. Note that existing MIM works adopt a medium or high masking ratio (Xie et al., 2021b; He et al., 2022) (e.g., 60% or 70%, see Fig. 2) during pre-training, and in these settings, the pairwise interactions between patches are under a middle-size context measured by the order m. Early inpainting work based on CNN (Pathak et al., 2016) resembles MIM but attracts little attention due to the much inferior performance to contrastive learning methods. The inpainting task adopts the masking strategy as illustrated in Fig. 1(c), which masks a full large region instead of random small patches. Such masking mechanisms ignore patch interaction and focus only on reconstruction leading to poor, learned representation quality. To investigate whether MIM makes the model more sensitive to patch interactions of some particular orders, we resort to the tool of multi-order interactions introduced by (Deng et al., 2022; Zhang et al., 2020). Intuitively, mth-order interactions of patches refer to inference patterns (deep features) induced from m number of patches of the original image in the input space. With a small value of m (low-order interactions), the model simply learns local features such as texture. Formally, the multi-order interaction I(m)(i, j) is to measure the order of interactions between patches i and j. We define I(m)(i, j) to be the average interaction utility between patches i and j on all contexts consisting of m patches. m indicates the order of contextual complexity of the interaction. Mathematically, given an input image x with a set of n patches N = {1, . . . , n} (e.g., an image with n pixels), the multi-order interaction I(m)(i, j) is defined as:
$$I^{(m)}(i, j) = \mathbb{E}_{S \subseteq N \setminus \{i, j\},\, |S| = m}\left[\Delta f(i, j, S)\right], \quad (1)$$
where ∆f(i, j, S) = f(S ∪ {i, j}) − f(S ∪ {i}) − f(S ∪ {j}) + f(S). f(S) indicates the score of output with patches in N \ S kept unchanged but replaced with the baseline value (Ancona et al., 2019), where the context S ⊆ N . See Appendix B.2 for details. To measure the interaction complexity of the neural network, we measure the relative interaction strength J (m) of the encoded m-th order interaction as follow:
$$J^{(m)} = \frac{\mathbb{E}_{x \in \Omega}\, \mathbb{E}_{i, j}\, \left|I^{(m)}(i, j \mid x)\right|}{\mathbb{E}_{m'}\, \mathbb{E}_{x \in \Omega}\, \mathbb{E}_{i, j}\, \left|I^{(m')}(i, j \mid x)\right|}, \quad (2)$$
where Ω is the set of all samples and 0 ≤ m ≤ n − 2. J(m) is the average value over all possible pairs of patches of the input samples. J(m) is normalized by the average value of all interaction strengths and thus indicates the distribution (the area under the curve sums up to one) of the order of interactions of the network. In this work, we use J(m) as the metric to evaluate and analyze the interaction orders of the network with MIM pre-training. We conduct experiments on IN-100 with image size 224×224 and use ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architectures. We consider a patch of size 16×16 as an input patch. For the computation of J(m), we adopt the sampling solution following previous works (Deng et al., 2022; Zhang et al., 2020). As can be seen from Fig. 1(c), ViT-S with random weight initialization tends to learn simple interactions with few patches (e.g., fewer than 0.05n patches), while MIM pre-trained models show stronger interactions at relative middle orders (from 0.05n to 0.5n). Similarly, as observed from Fig. 1(d), MIM pre-trained
ResNet-50 enhances the middle-order interactions from 0.1n to 0.55n compared to randomly initialized models. Stronger middle-order interactions form more complex features such as shape and edge compared to local texture features learned from low-order interactions (Naseer et al., 2021).
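For completeness, a Monte-Carlo sketch of estimating J(m) from Eqs. (1)-(2) follows; `score` maps a set of kept patches to f(x_S), and the sampling budget is an illustrative assumption:

```python
import random

def delta_f(score, i, j, S):
    """Delta f(i, j, S) = f(S+{i,j}) - f(S+{i}) - f(S+{j}) + f(S)."""
    return (score(S | {i, j}) - score(S | {i})
            - score(S | {j}) + score(S))

def interaction_strength(score, n, orders, n_pairs=20, n_contexts=50):
    """Monte-Carlo estimate of the normalized strength J(m) for each order m."""
    raw = {}
    for m in orders:                      # each m must satisfy 0 <= m <= n - 2
        vals = []
        for _ in range(n_pairs):
            i, j = random.sample(range(n), 2)
            rest = [k for k in range(n) if k != i and k != j]
            for _ in range(n_contexts):
                S = set(random.sample(rest, m))
                vals.append(abs(delta_f(score, i, j, S)))
        raw[m] = sum(vals) / len(vals)    # E_{i,j,S} |I^(m)(i, j)|
    denom = sum(raw.values()) / len(raw)  # E_{m'} of the average strengths
    return {m: v / denom for m, v in raw.items()}
```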
4 APPROACH
We propose a generic MIM framework following two design rules: (a) No complex or non-generic designs are adopted to ensure compatibility with all network architectures. (b) Better middleorder interactions between patches for more generalized feature extraction. Figure 3 highlights the difference between our proposed framework and existing MIM frameworks in terms of three key components: masking strategy, encoder/decoder architecture design and prediction targets.
4.1 ARCHITECTURE AGNOSTIC FRAMEWORK
Mask Where Middle-order Interactions Occur. Existing works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b; Wei et al., 2021) adopt the masking strategy where the input image is divided into non-overlapping patches and a random subset of patches is masked. MAE utilizes a Transformer as the decoder and takes only the visible patches into the encoder; masked tokens are appended to the decoder input to reconstruct the masked patches. SimMIM (Xie et al., 2021b) and MaskFeat (Wei et al., 2021) utilize a fully connected layer as the decoder and feed the mask token into the encoder together with the visible patches. The mask token (Devlin et al., 2018) is a token-shared learnable parameter that indicates the presence of missing patches to be predicted. Despite different choices of decoder structures, the mask token is placed either at the input to the encoder or at the decoder. Mathematically, the masking process of MIM is defined as $x_{mask} = x \odot (1-M) + T \odot M$, where $M$ is the random occlusion mask and $T$ represents the learnable mask token. Such masking at the patch embedding layer aligns with the attention mechanism of Transformers, which is robust against occlusion. However, masking at the stem layer undermines the context extraction capability of CNNs, which rely on local inductive biases. Moreover, masking at the input stages of the network leads to low-order interactions. Thus, we propose to mask intermediate features, where the output feature contains both semantic and spatial information and the mask token can encode interactions with a medium number of tokens. More concretely, our masking operation is defined as $z^{l}_{mask} = z^{l} + T \odot D(M)$, where $z^{l}$ is the intermediate feature map of $x$ at layer-$l$ in the Transformer encoder (or stage-$l$ in CNNs) and $D(\cdot)$ is the corresponding down-sampling function of the occlusion mask.
Filling Masked Tokens with RGB Mean. It is worth noting that existing works directly replace the occluded patches with the mask token in the input space or after the patch embedding (Bao et al., 2022; Xie et al., 2021b). In contrast, we use the average RGB value to fill the occluded patches as the input to the encoder and add the mask token onto the intermediate feature maps of the encoder. The masking mechanism originates from NLP, where languages are of high-level semantics and do not require low-level feature extraction as image processing does. Introducing a zero mask at the early stages of the network, where low-level feature extraction happens, is harmful to feature extraction. From the view of the Fourier domain, the RGB mean is the DC component of images. It not only brings about the minimum local statistics variation caused by the masking operation but also forces the network to model the more informative medium frequencies instead of filling the masked patches with blurry color blocks of low frequencies. The proposed masking strategy is generic to both convolution and self-attention in that it accommodates low-level to semantic-level feature extraction. A minimal sketch of both operations is given below.
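The sketch below summarizes the two operations, assuming NCHW feature maps (for ViTs, the token sequence would be reshaped accordingly); tensor names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def fill_with_rgb_mean(x, mask):
    # Fill occluded patches with the per-image RGB mean (the DC component)
    # instead of zeros or a mask token; mask: (B, 1, H, W) in {0, 1}.
    mean = x.mean(dim=(2, 3), keepdim=True)        # (B, 3, 1, 1)
    return x * (1 - mask) + mean * mask

def add_mask_token(z, mask, mask_token):
    # z^l_mask = z^l + T * D(M): add the learnable mask token T onto an
    # intermediate feature map z^l, with D(.) down-sampling the occlusion mask.
    m = F.interpolate(mask, size=z.shape[2:], mode='nearest')
    return z + mask_token.view(1, -1, 1, 1) * m

# mask_token would be a learnable parameter, e.g.:
# mask_token = torch.nn.Parameter(torch.zeros(channels))
```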
4.2 MIDDLE-ORDER INTERACTIONS FROM FOURIER PERSPECTIVE
Current works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b) adopt raw RGB values as the prediction target. However, raw pixels in the spatial domain are heavily redundant and often contain low-order statistics (Bao et al., 2022; Wei et al., 2021; Zhou et al., 2021). MaskFeat (Wei et al., 2021) adopts the Histogram of Oriented Gradients (HOG) as the prediction target, outperforming MAE and SimMIM. HOG is a discrete descriptor of medium- or high-frequency features, which captures shape patterns based on middle-order interactions. ViTs and CNNs have low-pass and high-pass filtering properties, respectively (Park & Kim, 2022; 2021). ViTs and CNNs each have certain frequency bands that they cannot model well, and both cannot model middle-order interactions well (detailed in Appendix B.3). The observation that HOG, a medium-frequency descriptor, improves middle-order interactions leads to the hypothesis that learning medium frequencies would help the model learn more middle-order interactions. Given an RGB image x ∈ R3×H×W , the discrete Fourier transform (DFT) of each channel is defined as:
$$\mathcal{F}(u,v) = \sum_{h=1}^{H}\sum_{w=1}^{W} x(h,w)\, e^{-2\pi j\left(\frac{uh}{H} + \frac{vw}{W}\right)}. \quad (3)$$
In addition to the common MIM loss in the spatial domain Lspa, we propose Lfreq in the Fourier domain:
$$\mathcal{L}_{freq} = \sum_{c=1}^{3}\sum_{u=1}^{H}\sum_{v=1}^{W} \omega(u,v)\,\Big\|\mathrm{DFT}\big(x^{pred}_{c}\odot M + \mathrm{de}(x^{pred}_{c})\odot(1-M)\big) - \mathrm{DFT}(x_{c})\Big\|, \quad (4)$$
where $x^{pred}$ is the predicted image, $\mathrm{de}(\cdot)$ is the gradient-detach (stop-gradient) operation, and $\omega(u, v)$ is the frequency weighting matrix. $\omega(u, v)$ enables both ViTs and CNNs to model features of medium frequencies rather than the local textures and noise corresponding to high frequencies. Inspired by the Focal Frequency loss (Jiang et al., 2021), we define the adaptive $\omega(u, v)$ as follows:
$$\omega(u,v) = \Big\|\mathrm{DFT}\big(x^{pred}_{c}\odot M + \mathrm{de}(x^{pred}_{c})\odot(1-M)\big) - \mathrm{DFT}(x_{c})\Big\|^{\alpha}, \quad (5)$$
where α is a scaling factor, and we set α = 1. Fig. B.3 verifies that Eq. (5) allows the model to learn previously ignored frequencies (mostly the medium-frequency components). Note that Lfreq introduces negligible overhead by using Fast Fourier Transform (FFT) algorithms with O(n log n) complexity to compute the DFT. The overall loss function of A2MIM is then defined as:
$$\mathcal{L} = \mathcal{L}_{spa} + \lambda\,\mathcal{L}_{freq}, \quad (6)$$
where $\mathcal{L}_{spa} = \big\|(x^{pred} - x)\odot M\big\|$ is the spatial reconstruction loss on masked pixels and $\lambda$ is a loss weighting parameter. We set $\lambda$ to 0.5 by default.
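For reference, a minimal PyTorch sketch of the overall objective is given below; the exact normalization of the spatial term and the full detachment of the adaptive weight are implementation assumptions (the latter following the Focal Frequency loss).

```python
import torch

def freq_loss(x_pred, x, mask, alpha=1.0):
    # L_freq (Eqn. 4-5): spectra of the masked prediction (unmasked pixels
    # gradient-detached, i.e., de(.)) vs. the target, with adaptive weight w.
    x_mix = x_pred * mask + x_pred.detach() * (1 - mask)
    diff = torch.fft.fft2(x_mix, dim=(-2, -1)) - torch.fft.fft2(x, dim=(-2, -1))
    mag = diff.abs()
    w = mag.detach() ** alpha          # adaptive w(u, v), Eqn. (5), alpha = 1
    return (w * mag).mean()

def a2mim_loss(x_pred, x, mask, lam=0.5):
    # Overall objective (Eqn. 6): L1 on masked pixels + lambda * L_freq.
    l_spa = ((x_pred - x).abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return l_spa + lam * freq_loss(x_pred, x, mask)
```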
5 EXPERIMENTS
5.1 PRE-TRAINING SETUP
We adopt ResNet-50 (He et al., 2016) and Vision Transformer (Dosovitskiy et al., 2021) (ViTS/16 and ViT-B/16) as the backbone. We pre-train on ImageNet-1K (IN-1K) training set with AdamW (Loshchilov & Hutter, 2019) optimizer with a basic learning rate of 1.5× 10−4 adjusted by
a cosine learning rate scheduler and a batch size of 2048. The input image size is 224× 224 with a masked patch size of 32× 32. We use a random masking ratio of 60%. By default, the learnable mask tokens are placed at stage-3 in ResNet-50 and layer-5/layer-8 in ViT-S/ViT-B, respectively. We adopt a linear prediction head as the decoder (Xie et al., 2021b). A2MIM+ indicates adopting HOG as supervision and using the MLP decoder with depth-wise (DW) convolution. A sketch of the masking procedure is given below. Our experiments are implemented on OpenMixup (Li et al., 2022) in PyTorch and conducted on workstations with NVIDIA V100 GPUs. We report the average results of 3 trials for all experiments and use bold and underline to indicate the best and the second-best performance. See Appendix A for detailed pre-training settings.
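The random patch-wise masking described above (60% ratio, 32×32 mask patches on 224×224 inputs) can be sketched as follows; names and shapes are illustrative assumptions.

```python
import torch

def random_patch_mask(batch, img_size=224, mask_patch=32, ratio=0.6):
    # Build a pixel-level occlusion mask (1 = masked) by sampling 60% of the
    # 7x7 grid of 32x32 patches uniformly at random for each image.
    n_side = img_size // mask_patch
    n_patch = n_side * n_side
    n_mask = int(n_patch * ratio)
    idx = torch.rand(batch, n_patch).argsort(dim=1)[:, :n_mask]
    m = torch.zeros(batch, n_patch).scatter_(1, idx, 1.0)
    m = m.view(batch, 1, n_side, n_side)
    return m.repeat_interleave(mask_patch, 2).repeat_interleave(mask_patch, 3)
```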
5.2 IMAGE CLASSIFICATION ON IMAGENET-1K
Evaluation Protocols. We first evaluate the learned representation by end-to-end fine-tuning (FT) and linear probing (Lin.) protocols on IN-1K. For evaluation on CNNs, we adopt the RSB A2/A3 (Wightman et al., 2021) training settings for fine-tuning ResNet-50, which employ the LAMB (You et al., 2020) optimizer with a cosine scheduler for 300/100 epochs. For the linear probing setting on ResNet-50, we freeze the backbone features and train a linear classifier with an initial learning rate of 30 and a batch size of 256 following MoCo (He et al., 2020). For evaluation on Transformers, we employ the fine-tuning protocol of MAE (He et al., 2022), which uses the DeiT (Touvron et al., 2021) augmentation setting and an AdamW optimizer for 100-epoch training, and adopt a layer-wise learning rate decay of 0.65 following (Bao et al., 2022). See Appendix A for detailed evaluation configurations.
ResNet-50. We compare the proposed A2MIM with classical self-supervised learning methods (Inpainting (Pathak et al., 2016), Relative-Loc (Doersch et al., 2015), and Rotation (Gidaris et al., 2018)), contrastive learning (CL), and MIM methods with 100/300 pre-training epochs. We modify MIM methods to run them on ResNet-50: the learnable mask token is applied to the encoders of BEiT (Bao et al., 2022), Data2Vec (Baevski et al., 2022), and SimMIM (Xie et al., 2021b) after the
Table 3: Performance of object detection and semantic segmentation tasks based on ResNet-50 on COCO and ADE-20K.

Method          Epochs   COCO AP^box   COCO AP^mask   ADE-20K mIoU
PyTorch (Sup.)  120      38.2          33.3           36.1
SimCLR          800      37.9          33.3           37.6
MoCoV2          400      39.2          34.3           37.5
BYOL            400      38.9          34.2           37.2
SwAV            800      38.4          33.8           37.3
SimSiam         400      39.2          34.4           37.2
Barlow Twins    800      39.2          34.3           37.3
SimMIM‡         300      39.1          34.2           37.4
CIM             300      -             -              38.0
A2MIM           300      39.8          34.9           38.3
Table 4: Performance of object detection and semantic segmentation tasks based on ViT-B on COCO and ADE-20K.
Method       Supervision   Epochs   COCO AP^box   COCO AP^mask   ADE-20K mIoU
DeiT (Sup.)  Label         300      47.9          42.9           47.0
MoCoV3       CL            300      47.9          42.7           47.3
DINO         CL            400      46.8          41.5           47.2
BEiT         DALLE         300      43.1          38.2           47.1
iBOT         Momentum      400      48.4          42.7           48.0
MAE          RGB           1600     48.5          42.8           48.1
MaskFeat     HoG           800      49.2          43.2           48.8
SimMIM       RGB           800      48.9          43.0           48.4
CAE          DALLE         800      49.2          43.3           48.8
A2MIM        RGB           800      49.4          43.5           49.0
stem (the output feature with 56× 56 resolution); the encoder of MAE randomly selects 25% of the 56× 56 output features of the stem as unmasked patches and takes the reorganized 28× 28 patches as the input of the four stages. As shown in Tab. 1, our approach achieves competitive performance with state-of-the-art contrastive-based methods under 100-epoch RSB A3 fine-tuning. Note that MIM methods see fewer training samples per epoch than CL methods (40% vs. 200% of patches) and usually require longer pre-training. Based on the longer fine-tuning evaluation using RSB A2, our method (300-epoch) outperforms contrastive-based methods with even fewer training epochs. Meanwhile, A2MIM also improves the baseline SimMIM† (+0.8%) and the concurrent work CIM (+0.4%) in terms of RSB A3 fine-tuning for the longer pre-training. Besides, we also report the linear probing accuracy in the fast pre-training setting for reference, although our main focus is to learn representations with better fine-tuning performance. Although the linear probing performance of our method is lower than that of contrastive-based methods, it still improves the baseline by 0.6%.
ViT. We then evaluate A2MIM based on ViT-S/B in Tab. 2. We list the supervision target used by various pre-training methods in the second column of Tab. 2. DALL-E (Ramesh et al., 2021) and VQGAN (Esser et al., 2021) are pre-trained image tokenizers, while momentum refers to the momentum encoder. Our approach outperforms current state-of-the-art methods with complex supervision, e.g., SplitMask (MIM with CL combined), iBOT (complex teacher-student architecture), and CIM (pre-trained BEiT as supervision). Based on ViT-S/B, A2MIM improves the baseline SimMIM by 0.5%/0.4% with RGB as supervision and 0.7%/0.7% with the HOG feature as supervision.
5.3 TRANSFER LEARNING EXPERIMENTS
Object detection and segmentation on COCO. To verify the transfer abilities, we benchmark CL and MIM methods on object detection and segmentation with COCO (Lin et al., 2014). For evaluation on CNNs, we follow the setup in MoCo, which fine-tunes Mask R-CNN (He et al., 2017) with a ResNet-50-C4 backbone using the 2× schedule on COCO train2017 and evaluates on COCO val2017. Results in Tab. 3 indicate that our approach (300-epoch) outperforms contrastive-based methods with longer pre-training (+0.7% APbox and +0.6% APmask). For evaluation on Transformers, we follow MAE and CAE, which efficiently fine-tune Mask R-CNN with a ViT-B backbone using the 1× schedule. In Tab. 4, our approach (800-epoch) is superior to popular contrastive-based and MIM methods, e.g., it outperforms MAE (1600-epoch) by 0.9% APbox and 0.8% APmask.
Semantic segmentation on ADE20K. We then evaluate the transfer performance on semantic segmentation with ADE20K (Zhou et al., 2019) by fine-tuning UperNet (Xiao et al., 2018). Based on ResNet-50, all CNN models are fine-tuned for 160K iterations with SGD following MoCo. Results in Tab. 3 show that our method outperforms CL methods by at least 0.9% mIoU and outperforms CIM (which requires an extra pre-trained BEiT (Bao et al., 2022)) by 0.3% mIoU. Based on ViT-B, we fine-tune models for 80K iterations with AdamW following MAE. Tab. 4 shows that our approach consistently improves over MIM methods (e.g., it outperforms MAE and SimMIM by 0.9% and 0.6% mIoU).
5.4 ABLATION STUDY
We next verify the effectiveness of the proposed components. Ablation studies are conducted with ResNet-50 and ViTs on IN-100 and IN-1K using the fine-tuning protocol. Based on the modified baseline SimMIM (Lspa), we first compare different mask token mechanisms: Replacing denotes the original way in most MIM methods, and Addition denotes our proposed way that adds the mask token to intermediate feature maps of the backbone. As shown in Fig. 5, adding the mask token to the medium stages (stage-3) or layers (layer-5) yields the best performance. Replacing masked patches in input images with the RGB mean value slightly improves the baseline SimMIM, especially for ResNet-50 (88.19 vs. 87.75 on IN-100). Then, we verify the proposed Lfreq in Tab. 5. We find that simply using Lfreq without the adaptive re-weighting ω (Eqn. 5) brings limited improvements as a frequency constraint added to Lspa, while employing ω further enhances performance by helping the model to learn more informative frequency components. Additionally, we visualize reconstruction results in Fig. 4 to show the improvements brought by our proposed components (more results in Appendix B).
5.5 VERIFICATION OF A2MIM DESIGN RULES

Table 6: Analysis of the scalability of A2MIM for advanced components on IN-1K.

Module                                     ResNet-50   ViT-B
Decoder   Linear                           78.8        82.4
          2-layer MLP                      78.8        82.4
          2-layer MLP (w/ DW)              78.9        82.5
          2-layer Transformer              78.6        82.3
          2-layer Transformer (w/ DW)      78.8        82.6
Target    RGB                              78.8        82.4
          HoG Feature                      78.9        82.6
          DINO Feature                     78.9        82.7

We verify whether A2MIM meets the intended design rules using the same experiment settings as Sec. 5.4: (i) A2MIM is generic enough to incorporate advanced components proposed in previous works (e.g., complex decoders, advanced prediction targets). As for the decoder structure, we replace the original linear decoder with 2-layer MLP or Transformer decoders, but find limited improvements or degenerated performance (similar to SimMIM) in Tab. 6. Inspired by PVT.V2 (Wang et al., 2022), we introduce a depth-wise (DW) convolution layer (w/ DW) to the MLP decoder (adding a 5× 5 DW layer in between) and the Transformer decoder (adding a 3× 3 DW layer in each FFN (Wang et al., 2022)), which brings improvements compared to the linear decoder (a minimal sketch of this decoder is provided below). As for the prediction target, we follow MaskFeat to change the RGB target to the HOG feature or the output feature of ViT-B/16 pre-trained for 1600 epochs by DINO (Caron et al., 2021). Tab. 6 shows that using advanced targets significantly improves the performance of A2MIM for both ResNet-50 and ViT-B. Therefore, we can conclude that A2MIM is a generally applicable framework. (ii) A2MIM enhances occlusion robustness and middle-order interactions among patches, as shown by experiments on ImageNet-1K in Fig. A3.
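A minimal sketch of the MLP decoder with a depth-wise convolution described above; the layer sizes are illustrative assumptions.

```python
import torch.nn as nn

class MLPDecoderDW(nn.Module):
    # 2-layer MLP decoder (as 1x1 convolutions) with a 5x5 depth-wise
    # convolution in between, mapping encoder features to predictions.
    def __init__(self, in_dim=768, hidden_dim=512, out_dim=3):
        super().__init__()
        self.fc1 = nn.Conv2d(in_dim, hidden_dim, kernel_size=1)
        self.dw = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=5,
                            padding=2, groups=hidden_dim)  # depth-wise
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden_dim, out_dim, kernel_size=1)

    def forward(self, z):
        return self.fc2(self.act(self.dw(self.fc1(z))))
```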
6 CONCLUSION
In this paper, we delved deep into MIM and answered the question of what exactly is learned during MIM pre-training. We adopted multi-order interactions to study the interaction order among image patches. We discovered that MIM essentially teaches the network to learn middle-order interactions among image patches for more complex feature extraction, regardless of the network architecture. Based on our findings, we further proposed a general framework, A2MIM, that is compatible with both Transformers and CNNs for MIM tasks, aiming at enhancing patch interactions during self-supervised pre-training. Besides a different mask token mechanism, we proposed a loss in the Fourier domain to better learn the middle-order interactions. Experimental results have shown that our proposed framework improves the representations learned for both CNNs and Transformers, yielding superior performance to state-of-the-art methods on various downstream tasks.
A DETAILS OF COMPARISON EXPERIMENTS
This section provides experimental details for Sec. 5, e.g., pre-training and evaluation on ImageNet-1K and transfer learning settings on downstream tasks.
A.1 IMAGENET-1K EXPERIMENTS
Pre-training. The default settings of A2MIM for ResNet-50 and ViTs are provided in Tab. A1, following SimMIM (Xie et al., 2021b). We use the AdamW (Loshchilov & Hutter, 2019) optimizer with the cosine scheduler and the linear learning rate scaling rule (Goyal et al., 2020): lr = base_lr × batch size / 256. Similar to current MIM methods, we only use RandomResizedCrop with a scale of (0.67, 1.0) and do not employ other complex augmentations (e.g., Rand Augment (Cubuk et al., 2020), mixups (Yun et al., 2019), or stochastic depth) during pre-training. As for ViTs, we adopt Cosine decay for 100- and 300-epoch pre-training, while using Step decay (the learning rate multiplied by 0.1 at epoch 700) for 800-epoch pre-training. A minimal optimizer sketch is given below.
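The sketch below illustrates the optimizer setup with the linear scaling rule; the placeholder model and step count are illustrative, and warmup is omitted.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)                    # placeholder for the encoder
base_lr, batch_size, total_steps = 1.5e-4, 2048, 100_000

lr = base_lr * batch_size / 256            # linear scaling rule (Goyal et al., 2020)
optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                              betas=(0.9, 0.999), weight_decay=0.05)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps)
```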
End-to-end fine-tuning. Our fine-tuning settings follow common practices of supervised image classification on ImageNet-1K. As shown in Tab. A2, we fine-tune pre-trained ViTs for 100 epochs using the DeiT (Touvron et al., 2021) training recipe, which employs the AdamW (Loshchilov & Hutter, 2019) optimizer with the cross-entropy (CE) loss; we fine-tune pre-trained ResNet-50 for 100/300 epochs using the RSB A3/A2 (Wightman et al., 2021) settings, which employ the LAMB (You et al., 2020) optimizer with the binary cross-entropy (BCE) loss. Additionally, we use layer-wise learning rate decay as in (Bao et al., 2022) for fine-tuning ViT models (a sketch follows).
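A minimal sketch of the layer-wise learning rate decay for ViT fine-tuning; the attribute names (`patch_embed`, `blocks`, `head`) assume a timm-style ViT implementation.

```python
def layerwise_lr_groups(vit, base_lr, decay=0.65, n_layers=12):
    # Assign lr = base_lr * decay^(n_layers - depth): earlier Transformer
    # blocks receive smaller learning rates than the classification head.
    groups = [{'params': vit.patch_embed.parameters(),
               'lr': base_lr * decay ** n_layers}]
    for i, blk in enumerate(vit.blocks):
        groups.append({'params': blk.parameters(),
                       'lr': base_lr * decay ** (n_layers - 1 - i)})
    groups.append({'params': vit.head.parameters(), 'lr': base_lr})
    return groups
```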
Table A1: ImageNet-1K A2MIM pre-training settings for ResNet-50 and ViT models.
Configuration             ResNet-50              ViTs
Pre-training resolution   224×224                224×224
Mask patch size           32×32                  32×32
Mask ratio                60%                    60%
Optimizer                 AdamW                  AdamW
Base learning rate        1.5×10−4               1×10−4
Weight decay              0.05                   0.05
Optimizer momentum        β1, β2 = 0.9, 0.999    β1, β2 = 0.9, 0.999
Batch size                2048                   2048
Learning rate schedule    Cosine                 Cosine / Step
Warmup epochs             10                     10
RandomResizedCrop         ✓                      ✓
Rand Augment              ✗                      ✗
Stochastic Depth          ✗                      ✗
Gradient Clipping         ✗                      max norm = 5
Table A2: ImageNet-1K fine-tuning recipes for ResNet-50 (RSB A2/A3) and ViTs (DeiT).
Configuration            ViTs (DeiT)   ResNet-50 (RSB A2)   ResNet-50 (RSB A3)
FT epochs                100           300                  100
Training resolution      224           224                  160
Testing resolution       224           224                  224
Testing crop ratio       0.875         0.95                 0.95
Optimizer                AdamW         LAMB                 LAMB
Base learning rate       2.5×10−4      1.5×10−3             1×10−3
Weight decay             0.05          0.02                 0.02
Batch size               1024          2048                 2048
Learning rate schedule   Cosine        Cosine               Cosine
Warmup epochs            5             5                    5
Label smoothing          0.1           ✗                    ✗
Stochastic depth         0.1           0.05                 ✗
Gradient clipping        5.0           ✗                    ✗
Rand Augment             (9, 0.5)      (7, 0.5)             (6, 0.5)
Mixup alpha              0.8           0.1                  0.1
CutMix alpha             1.0           1.0                  1.0
Loss function            CE loss       BCE loss             BCE loss
A.2 OBJECT DETECTION AND SEGMENTATION ON COCO
We adopt Mask-RCNN (He et al., 2017) framework to perform transfer learning to object detection and segmentation on COCO (Lin et al., 2014) in Detectron21. For evaluation on ResNet-50, we follow MoCo (He et al., 2020) and fine-tune Mask R-CNN with the pre-trained ResNet-50-C4 backbone using 2× schedule (24 epochs). For evaluation of ViTs, we follow MAE (He et al., 2022), which employs the pre-trained ViT backbone and an FPN neck (Lin et al., 2017) in Mask R-CNN, and fine-tune the model using 1× schedule (12 epochs). For a fair comparison, we follow (Bao et al., 2022; Xie et al., 2021b) to turn on relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning, initialized as zero.
A.3 SEMANTIC SEGMENTATION ON ADE-20K
We adopt UperNet (Xiao et al., 2018) to perform transfer learning to semantic segmentation on ADE-20K and use the semantic segmentation implementation in MMSegmentation2. We initialize
1 https://github.com/facebookresearch/detectron2
2 https://github.com/open-mmlab/mmsegmentation
[Figure A1 appears here: (a) ViT-S and (b) ResNet-50 top-1 accuracy (%) vs. occlusion ratio (%) under random PatchDrop; (c) ViT-S and (d) ResNet-50 interaction strength J (m) vs. order m/n, comparing BYOL, MoCoV3, MAE, and SimMIM with DeiT or Vanilla fine-tuning.]
Figure A1: (a)(b): Robustness against different occlusion ratios of images (CL vs. MIM) is studied for both ViT-S and ResNet-50 on ImageNet-100. (c)(d): Distributions of the interaction strength J (m) (CL vs. MIM) are explored for both ViT-S and ResNet-50 on ImageNet-100. The label indicates the pre-training method + fine-tuning augmentation used; random stands for random weight initialization.
the UperNet using the pre-trained backbones (ResNet-50 or ViTs) on ImageNet-1K. For ViTs, we fine-tune end-to-end for 80K iterations with AdamW and a batch size of 16. We search for an optimal layer-wise decay rate from {0.8, 0.9} and an optimal learning rate from {1× 10−4, 2× 10−4, 3× 10−4} for all competitors. Similar to the fine-tuning settings on COCO, we use relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning, following (Bao et al., 2022; Xie et al., 2021b). For ResNet-50, we follow MoCo (He et al., 2020), i.e., all CNN models are fine-tuned for 160K iterations with SGD with a momentum of 0.9 and a batch size of 16.
B EMPIRICAL EXPERIMENTS
This section provides background information and experimental details for Sec. 3. We also provide additional results of occlusion robustness evaluation and multi-order interaction strength.
B.1 OCCLUSION ROBUSTNESS
In Sec. 3.1, we analyze the robustness against occlusion of fine-tuned models on ImageNet-100 (a subset of ImageNet-1K defined by Tian et al. (2020)) using the official implementation3 provided by Naseer et al. (2021). Both MIM and contrastive-based methods are pre-trained for 400 epochs on ImageNet-100 using their pre-training settings for ImageNet-1K. We adopt the DeiT fine-tuning recipe in Tab. A2 and use the same setting (100 epochs) for both ViT-S and ResNet-50. Note that we use the modified SimMIM for ResNet-50 (replacing masked patches in the input image with the RGB mean) in all experiments.
As shown in Fig. 1 and A1, we compare MIM pre-trained models with supervised methods (using various augmentations) and contrastive learning pre-trained methods in terms of top-1 accuracy under various occlusion ratios. We find that MIM methods show better occlusion robustness on both Transformers and CNNs. In addition to Sec. 3.1, we also provide results of salient occlusion for ViT-S and ResNet-50 on ImageNet-100 in Fig. A2. Note that the occlusion ratio means the ratio of dropped patches to total patches, and we plot the mean accuracy across 3 runs. We can conclude that MIM pre-trained models have stronger robustness against random and salient occlusions than supervised and contrastive-based methods.
B.2 MULTI-ORDER INTERACTION
In Sec. 3.2, we interpret what is learned by MIM through multi-order interactions (Deng et al., 2022; Zhang et al., 2020). The interaction complexity can be represented by I(m)(i, j) (defined in Eqn. 1), which measures the average interaction utility between variables i, j over all contexts consisting of m variables. Notice that the order m reflects the contextual complexity of the interaction I(m)(i, j). For example, a low-order interaction (e.g., m = 0.05n) means a relatively simple collaboration between variables i, j, while a high-order interaction (e.g., m = 0.95n) corresponds to a complex collaboration. As pointed out in the representation bottleneck (Deng et al., 2022), deep neural networks (DNNs) are more likely to encode both low-order interactions and high-order interactions, but often fail to learn middle-order interactions. We hypothesize that MIM helps models learn more middle-order
3 https://github.com/Muzammal-Naseer/Intriguing-Properties-of-Vision-Transformers
[Figure A2 appears here: top-1 accuracy (%) vs. occlusion ratio (%) for (a) ViT-S random PatchDrop, (b) ViT-S salient PatchDrop, (c) ResNet-50 random PatchDrop, and (d) ResNet-50 salient PatchDrop, comparing BYOL, MoCoV3, MAE, SimMIM, and random initialization.]
Figure A2: Robustness against various random or salient occlusion ratios of images is studied in (a)(b) for ViT-S, and (c)(d) for ResNet-50 using various experimental settings on ImageNet-100. The label indicates the pre-training method + fine-tuning setting used; random stands for random weight initialization.
[Figure A3 appears here: (a) ViT-S and (b) ResNet-50 top-1 accuracy (%) vs. occlusion ratio (%) under random PatchDrop; (c) ViT-S and (d) ResNet-50 interaction strength J (m) vs. order m/n, comparing A2MIM with MoCoV3, BYOL, SimMIM, and supervised (PyTorch) baselines.]
Figure A3: Verification of robustness and interaction of A2MIM with ViT-S and ResNet-50 on ImageNet-1K. (a)(b): Robustness against different occlusion ratios of images is studied for A2MIM and various methods. (c)(d): Distributions of the interaction strength J (m) are explored.
interactions since MIM has a natural advantage in cases where some parts of the image are masked out. In Fig. 1, we calculate the interaction strength J (m) (defined in Eqn. 2) for fine-tuned models on ImageNet-100 using the official implementation4 provided by Deng et al. (2022). Specifically, we use the image of 224× 224 resolution as the input and calculate J (m) on 14× 14 grids, i.e., n = 14× 14. We set the model output as $f(x_S) = \log \frac{P(\hat{y}=y\,|\,x_S)}{1 - P(\hat{y}=y\,|\,x_S)}$ given the masked sample $x_S$, where $y$ denotes the ground-truth label and $P(\hat{y}=y\,|\,x_S)$ denotes the probability of classifying the masked sample $x_S$ into the true category.
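A minimal sketch of this log-odds score; the clamping epsilon is an assumption added for numerical stability.

```python
import torch

def log_odds_score(model, x_masked, y):
    # f(x_S) = log( P(y_hat = y | x_S) / (1 - P(y_hat = y | x_S)) )
    probs = torch.softmax(model(x_masked), dim=-1)
    p = probs[..., y].clamp(1e-6, 1 - 1e-6)
    return torch.log(p / (1 - p))
```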
B.3 MIM FROM FREQUENCY PERSPECTIVE
We first plot the log magnitude of Fourier-transformed feature maps of ResNet-50 with different pre-training methods using the tools5 provided by Park & Kim (2022) on ImageNet-1K. Following (Park & Kim, 2022), we first convert feature maps into the frequency domain and represent them on the normalized frequency domain (the highest frequency components are at {−π,+π}). In Fig. A4(a), we report the amplitude ratio of high-frequency components using the ∆ log amplitude. As shown in Fig. A4(a), inpainting and MIM show similar low-pass filtering effects at convolution layers compared to contrastive learning. This indicates that inpainting and MIM reduce noise and uncertainty induced by high-frequency features. We argue that the reconstruction performance of MIM is mainly related to low- or high-order interactions of patches (Deng et al., 2022), while reconstruction performance is not directly related to the learned representation quality. Then, we provide the standard deviation of feature maps by block depth as in (Park & Kim, 2022; 2021), which first calculates the feature map variance on the last two dimensions and then averages over the channel dimension for the whole dataset. Fig. A4(b) shows the feature variance of each layer of ResNet-50 with different pre-training methods on IN-1K. This figure indicates that MIM tends to reduce the feature map variance; conversely, supervised training, inpainting, and contrastive learning based on CNNs tend to increase variance. Compared to MIM, which learns better middle-order interactions, the inpainting task fails to filter out low-order interactions and thus leads to higher variance. To conclude, MIM methods learn middle-order interactions and reduce the feature map uncertainty (high frequencies) based on the CNN encoder for generalized and stabilized feature extraction.
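A minimal sketch of the two measurements above (the Δ log amplitude of a feature map and its spatial variance); the choice of the corner bin as the highest-frequency component after the shift is an assumption of this sketch.

```python
import torch

def delta_log_amplitude(feat):
    # Relative log amplitude: highest-frequency vs. DC component of a
    # feature map (B, C, H, W), averaged over batch and channels.
    spec = torch.fft.fftshift(torch.fft.fft2(feat, dim=(-2, -1)), dim=(-2, -1))
    amp = spec.abs().mean(dim=(0, 1))
    h, w = amp.shape
    return torch.log(amp[0, 0]) - torch.log(amp[h // 2, w // 2])

def feature_map_variance(feat):
    # Variance over spatial dimensions, averaged over channels and batch.
    return feat.var(dim=(-2, -1)).mean()
```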
4 https://github.com/Nebularaid2000/bottleneck
5 https://github.com/xxxnell/how-do-vits-work
[Figure A4 appears here: (a) log amplitude vs. normalized depth and (b) feature map variance vs. normalized depth, comparing BYOL, MoCoV3, Inpainting, SimMIM, DeiT (Sup.), and Random.]
Figure A4: (a) Fourier-transformed feature maps. The vertical axis is the relative log amplitude of the high-frequency components, and the horizontal axis is the normalized depth of the network. The blue columns indicate the pooling layers, while the white columns indicate the convolution layers. (b) Feature map variance. The vertical axis is the average variance value of feature maps. DeiT (Sup.) is supervised pre-training. The results of the randomly initialized network are plotted for reference.
[Figure A5 appears here: for a "fox" sample, the raw image, the predicted image, their Fourier spectra, and the Lfreq loss weights (w/ and w/o ω) are visualized.]
Figure A5: Visualization of predicted images and the Lfreq loss weight in the Fourier domain. From the view of the Fourier spectrum, the raw image (left) contains 99% low-frequency components (usually presenting contents) and rich medium-frequency (structural patterns) and high-frequency components (local details and noises), while the predicted result (middle) provides fewer medium- or high-frequency components. Calculated in the Fourier domain, the loss weights (right) of Lfreq w/o ω help the model to learn the full spectrum, while Lfreq focuses on the low- and medium-frequency parts, which are more likely to be low-order or middle-order interactions.
C MORE EXPERIMENT RESULTS
C.1 ABLATION OF THE PROPOSED MODULES
In addition to the ablation studies in Sec. 5.4, we provide an ablation study on the proposed Lfreq in the Fourier domain, as shown in Figure A5. As discussed in Sec. 4, we hypothesize that learning medium frequencies would help the model better learn middle-order interactions. We thereby propose Lfreq to tackle the dilemma of Lspa, which tends to learn low-frequency components (i.e., contents reflected by high-order interactions). Although the reconstruction loss in the Fourier domain has a global perception, the high-frequency components are usually constructed by local details and noises (i.e., low-order interactions), which might hurt the generalization ability. Therefore, we introduce the re-weighting ω(u, v) to force the model to learn more medium-frequency components, which correspond to middle-order interactions. Then, we perform a further analysis of the masked patch size for A2MIM in Tab. A3. Note that we pre-train ResNet-50 for 100 epochs and ViT-B for 400 epochs on ImageNet-1K and report the fine-tuning results. As shown in Tab. A3, when the mask ratio is 60%, the optimal masked patch size is 32× 32 for A2MIM, which is the same as SimMIM.
Table A3: Ablation of masked patch size for A2MIM based on ResNet-50 and ViT-B on ImageNet-1K.

Model      Masked patch size   Mask ratio   PT epoch   Top-1 Accuracy (%)
ResNet-50  8 / 16 / 32 / 64    0.6          100        78.2 / 78.6 / 78.8 / 78.7
ViT-B      8 / 16 / 32 / 64    0.6          400        82.9 / 83.4 / 83.5 / 83.3
C.2 ANALYSIS OF OCCLUSION ROBUSTNESS AND INTERACTION OF A2MIM
We further analyze the occlusion robustness and interaction strength of A2MIM with ViT-S (pre-trained for 400 epochs) and ResNet-50 (pre-trained for 100 epochs) on ImageNet-1K, as shown in Fig. A3. Fig. A3(a) and A3(b) show that A2MIM is more robust to occlusion than the baseline SimMIM and contrastive learning methods with both Transformers and CNNs. Meanwhile, we find that MIM methods learn more balanced interaction strength than both supervised and contrastive learning methods in Fig. A3(c) and A3(d). A2MIM further improves SimMIM by capturing more middle-order interactions (0.2n to 0.6n) with Transformers and CNNs. Therefore, we can conclude that A2MIM helps the model to learn better middle-order interactions between patches for a more generalized visual representation.
C.3 SCALING-UP A2MIM
Additionally, we scale up the model size of backbone encoders to verify the performance of A2MIM with ResNet and ViT on ImageNet-1K. As shown in Table A4, our proposed A2MIM and its advanced variant A2MIM+ consistently improve both contrastive-based and MIM methods across all architecture scales, e.g., A2MIM outperforms SimMIM by 0.5%/0.5%/0.5%/0.2% and 0.6%/0.4% based on ViT-S/B/L/H and ResNet-50/101, demonstrating that A2MIM is an architecture-agnostic and scalable framework for MIM pre-training.
Table A4: ImageNet-1K fine-tuning (FT) top-1 accuracy (%) with ResNet (R) and ViT of various model scales. We adopt the 100-epoch fine-tuning protocols for both architectures.

Methods   Supervision   ViT-S   ViT-B   ViT-L   ViT-H   R-50   R-101
Sup.      Label         79.9    81.8    82.6    83.1    78.1   79.8
MoCoV3    CL            81.4    83.2    84.1    -       78.7   -
DINO      CL            81.5    83.6    -       -       78.7   -
MAE       RGB           -       83.6    85.9    86.9    77.1   -
SimMIM    RGB           81.7    83.8    85.6    86.8    78.2   80.0
MaskFeat  HoG           -       84.0    85.7    -       78.4   -
A2MIM     RGB           82.2    84.2    86.1    87.0    78.8   80.4
A2MIM+    HoG           82.4    84.5    86.3    87.1    78.9   80.5
D VISUALIZATION EXPERIMENTAL DETAILS
In addition to visualization results in Sec. 5.4, we visualize more reconstruction results of A2MIM here. Similar to Fig. 4, we ablate the proposed components in A2MIM based on ResNet-50 in Fig. A6, which demonstrates that A2MIM helps ResNet-50 learn more spatial details, i.e., more middle-order interactions. Moreover, we study the effects of the mask token in both ViTs and CNNs in Fig. A7.
[Figure A6 appears here: raw images ("fox", "cucumber", "balloon"), their masked versions (zero mask vs. RGB mean mask), and the corresponding predictions.]
Figure A6: Visualizations of predicted results from SimMIM (middle) and our A2MIM (right) based on ResNet-50 pre-trained for 100 epochs on ImageNet-1K. Notice that T (s∗) denotes adding the mask token T to the optimal stage-s∗ in ResNet-50. We ablate the proposed components by adding them to the baseline SimMIM: replacing the zero mask with the RGB mean mask (the modified SimMIM baseline) and adding the mask token T (s∗) relieve grid-like artifacts in predicted results; adding the proposed Lfreq helps the model to capture more informative details.
[Figure A7 appears here: raw and masked images ("goldfish", "balloon") with predictions from ViT-B and ResNet-50, before and after removing the learned mask token.]
Figure A7: Visualizations of predicted results with and without the mask token on ImageNet-1K. Notice that mask tokens are adopted in the pre-trained models based on ViT-S (300-epoch) or ResNet-50 (100-epoch). Based on ViT-S, removing the mask token corrupts both the contents of masked patches and the overall colors in SimMIM, while only corrupting the masked contents in A2MIM. Based on ResNet-50, removing the mask token slightly affects spatial details in the masked patches and causes grid-like artifacts in the unmasked patches. The different effects of the mask token in ViT-S and ResNet-50 might be because the two architectures use different spatial-mixing operators and normalization layers. As for ViTs, the self-attention operation captures informative details from unmasked patches, but the non-overlapping patch embedding and layer normalization keep each patch isolated. The mask token learns the mean templates (contents) of masked patches and gathers spatial details from unmasked patches by the self-attention operation. As for CNNs, each patch shares the same contents extracted by batch normalization layers, and the convolution operation extracts features from unmasked and masked patches equally. The mask token learns more high-frequency and informative details. | 1. What is the focus of the paper regarding MIM and its compatibility with different architectures?
2. What are the strengths and weaknesses of the proposed Architecture-Agnostic Masked Image Modeling (A2MIM) framework?
3. Do you have any concerns regarding the claims made in the paper, such as the effectiveness of the proposed method and its performance compared to other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studied the problem that MIM is compatible with the Transformer family but is incompatible with CNNs. To this end, it proposed an Architecture-Agnostic Masked Image Modeling framework (A2MIM) that is compatible with both Transformers and CNNs in a unified way. Specifically, this paper used RGB Mean as masked tokens and added mask tokens on the intermediate feature maps. A frequency domain reconstruction loss is also applied to improve the ability of learned models. Experiments are conducted on ImageNet-1K, downstream COCO detection and segmentation, and ADE20K dataset with ViT models and ResNet-50.
Strengths And Weaknesses
Strengths:
- MIM is an interesting problem/framework to study, and understanding how MIM-based methods perform well in the vision domain is important for this field.
- The proposed method is clear to understand and is easy to follow by other researchers.
- The experiments are extensive on ImageNet-1K, COCO, and ADE20K datasets.
Weaknesses:
Some statements are incorrect or even a little bit over-claimed, such as:
In the abstract, the authors stated: “MIM primarily works for the Transformer family but is incompatible with CNNs.” And in the introduction section, this paper stated "To the best of our knowledge, we are the first to carry out MIM on CNNs that outperforms contrastive learning counterparts." This is not true, as ConvNext [1] is adequate to handle the MIM learning strategy. And this paper's approach with CNNs is actually similar to it when using patchified images with a CNN.
[1] Liu, Zhuang, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. "A convnet for the 2020s." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11976-11986. 2022.
“Based on this fact, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM), which is compatible with both Transformers and CNNs in a unified way.” The proposed framework is basically similar to the regular MIM approach with trivial or not significant modifications (use RGB Mean as the Masked Tokens and add mask token on the intermediate feature map). Thus, the called "a unified way" seems incorrect. The used Fourier/Frequency domain is also not related to the core framework in this paper and can straightforwardly be applied in the vanilla MIM models.
The authors stated that the proposed framework has “(a) No complex or non-generic designs are adopted to ensure compatibility with all network architectures. (b) Better middle order interactions between patches for more generalized feature extraction.” I think this is because the proposed framework is too close to the original MIM architecture.
The authors claimed they “delved deep into MIM and answered the question of what exactly is learned during MIM pre-training.” This statement is very strong, however, after reading this paper carefully, I still did not get the insights or intuition about what MIM pre-training learns from the input data from the paper’s descriptions. Proper explorations and explanations are necessary to support this claim.
The performance in this paper is not competitive. As shown in Tables 1, 2, 4, etc., the improvement is fairly marginal (0%~0.2%). Considering that this paper used extra Fourier/Frequency domain reconstruction supervision/loss, I’m not sure whether the proposed strategy is truly effective or not.
The writing and organization of this paper can be improved. The used Fourier/Frequency domain reconstruction loss seems not related to the key approach of the architecture-agnostic masked image modeling framework, since it can also be applied to the regular MIM frameworks. Moreover, the insights of using this additional supervision are not clearly expressed. This part seems fragmented from others in the method.
Clarity, Quality, Novelty And Reproducibility
This paper is clearly presented; however, the originality of this work is slightly insufficient. |
ICLR | Title
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Abstract
Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed via the pre-text task. However, the working principle behind MIM is not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned and how it is acquired via the MIM task. We observe that MIM essentially teaches the model to learn better middle-order interactions among patches and extract more generalized features. Based on this fact, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that our A2MIM learns better representations without explicit design and endows the backbone model with the stronger capability to transfer to various downstream tasks for both Transformers and CNNs.
1 INTRODUCTION
Supervised deep learning with large-scale annotated data has witnessed an explosion of success in computer vision (CV) (Krizhevsky et al., 2012a; He et al., 2016) and natural language processing (NLP) (Vaswani et al., 2017). However, a large number of high-quality annotations are not always available in real-world applications. Learning representations without supervision by leveraging pre-text tasks has become increasingly popular.
In CV, early self-supervised learning approaches (Zhang et al., 2016; Doersch et al., 2015; Gidaris et al., 2018) aim to capture invariant features through predicting transformations applied to the same image. However, these methods rely on vision ad-hoc heuristics, and the learned representations are less generic for downstream tasks. Recently, contrastive learning-based approaches (Tian et al., 2020; Chen et al., 2020b; He et al., 2020) have witnessed significant progress, even outperforming supervised methods on several downstream tasks (Chen et al., 2020c; Grill et al., 2020; Zbontar et al., 2021). More recently, inspired by masked autoencoding methods (Radford et al., 2018; Devlin et al., 2018) in NLP, Masked Image Modeling (MIM) methods (Bao et al., 2022; He et al., 2022; Wei et al., 2021; Xie et al., 2021b) have brought about new advances for self-supervised pre-training on CV tasks. The transition from human language understanding to NLP masked autoencoding is quite natural because the filling of missing words in a sentence requires relatively comprehensive semantic understanding. In analogy, humans can understand and imagine masked content by visually filling the missing structures in an image containing occluded parts.
Different from contrastive learning, which yields a clustering effect from pre-training by pulling similar samples and pushing away dissimilar samples, MIM pre-training methods have not been extensively explored in the context of the expected knowledge learned or how this knowledge is acquired. Existing works mainly focus on improving downstream task performance via explicit designs, such as trying different prediction targets (Wei et al., 2021), adopting pre-trained tokenizers (Zhou et al., 2021), utilizing complex Transformer decoders (He et al., 2022), or combining with contrastive learning (El-Nouby et al., 2021). Moreover, the success of existing MIM methods is largely confined to Vision Transformer (ViT) structures (Dosovitskiy et al., 2021), since directly applying the mask token (Devlin et al., 2018) and positional embeddings to CNNs leads to inferior performance.
In this work, we carry out systematic experiments and show that MIM as a pre-training task essentially teaches the model to learn better middle-order interactions between patches for more generalized feature extraction regardless of the underlying network structure. Compared to the local texture features learned by low-order interactions between patches, more complex features such as shape and edge could be extracted via middle-order interactions among patches. The interaction of patches could be considered as information fusion via both the convolution operation of a CNN and the self-attention mechanism of a Transformer. That is to say, CNN and Transformer should both benefit from better middle-order interactions with MIM as the pre-text task.
To bridge the gap of MIM in terms of network architectures based on our extensive experimental analysis, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM) that focuses on enhancing the middle-order interaction capabilities of the network. Specifically, we mask the input image with the mean RGB value and place the mask token at intermediate feature maps of the network. In addition, we propose a loss in the Fourier domain to further enhance the middle-order interaction capability of the network. Our contributions are summarized as follows:
• We conducted systematic experiments and showed the essence of MIM is to better learn middle-order interactions between patches but not reconstruction quality.
• We proposed a novel MIM-based framework dubbed A2MIM that bridges the gap between CNNs and Transformers. We are also the first to perform MIM on CNNs without adopting designs native to ViTs, outperforming contrastive learning counterparts.
• Extensive experiments with both Transformers and CNNs on ImageNet-1K and public benchmarks for various downstream tasks show that our method achieves better pre-trained representation quality than state-of-the-art methods.
2 RELATED WORK
Contrastive Learning. Contrastive learning learns instance-level discriminative representations by extracting invariant features over distorted views of the same data. MoCo (He et al., 2020) and SimCLR (Chen et al., 2020b) adopted different mechanisms to introduce negative samples for contrast with the positive. BYOL (Grill et al., 2020) and its variants (Chen & He, 2020; Chen et al., 2021) further eliminate the requirement of negative samples to avoid representation collapse. Besides pairwise contrasting, SwAV (Caron et al., 2020) clusters the data while enforcing consistency between multi-augmented views of the same image. Barlow Twins (Zbontar et al., 2021) proposed to measure the cross-correlation matrix of distorted views of the same image to avoid representation collapsing. Meanwhile, some efforts have been made on top of contrastive methods to improve pre-training quality for specific downstream tasks (Xie et al., 2021a; Xiao et al., 2021; Selvaraju et al., 2021; Wu et al., 2022). MoCo.V3 (Chen et al., 2021) and DINO (Caron et al., 2021) adopted ViT (Dosovitskiy et al., 2021) in self-supervised pre-training to replace CNN backbones.
Autoregressive Modeling. Autoencoders (AE) are a typical type of network architecture that allows representation learning with no annotation requirement (Hinton & Zemel, 1993). By forcing a denoising property onto the learned representations, denoising autoencoders (Vincent et al., 2008; 2010) are a family of AEs that reconstruct the uncorrupted input signal with a corrupted version of the signal as input. Generalizing the notion of denoising autoregressive modeling, masked predictions attracted the attention of both the NLP and CV communities. BERT (Devlin et al., 2018) performs masked language modeling (MLM), where the task is to classify the randomly masked input tokens. Representations learned by BERT as pre-training generalize well to various downstream tasks. For CV, inpainting tasks (Pathak et al., 2016) to predict large missing regions using CNN encoders and colorization tasks (Zhang et al., 2016) to reconstruct the original color of images with removed color channels were proposed to learn representations without supervision. With the introduction of Vision Transformers (ViT) (Dosovitskiy et al., 2021; Liu et al., 2021), iGPT (Chen et al., 2020a) predicts succeeding pixels given a sequence of pixels as input. MAE (He et al., 2022) and BEiT (Bao et al., 2022) randomly mask out input image patches and reconstruct the missing patches with ViTs. Compared to MAE, MaskFeat (Wei et al., 2021) and SimMIM (Xie et al., 2021b) adopt linear layers as the decoder instead of another Transformer as in MAE. MaskFeat applied HOG as the prediction target instead of the RGB value. Other research endeavors (El-Nouby et al., 2021; Zhou et al., 2021; Assran et al., 2022; Akbari et al., 2021; Sameni et al., 2022) combine the idea of contrastive learning
(CL) with MIM. SplitMask (El-Nouby et al., 2021) proposed to use half of the image pixels to predict the other half while applying InfoNCE loss (Van den Oord et al., 2018) across the corresponding latent features. MSN (Assran et al., 2022) matches the representation of an image view containing randomly masked patches and the original unmasked image. Similarly, iBOT (Zhou et al., 2021) adopts the Siamese framework to combine self-distillation with MIM. Moreover, Data2Vec (Baevski et al., 2022) proposed a framework that applies the masked prediction idea for either speech, NLP, or CV. However, most MIM works are confined to ViT architectures, recently proposed CIM (Fang et al., 2022) adopts the output of a pre-trained tokenizer as target and takes the prediction of a frozen BEiT as input to the encoder as a workaround to enable MIM on CNNs. In this work, we propose A2MIM with no components native to ViTs adopted to perform MIM with ViTs and CNNs.
3 INTRIGUING PROPERTIES OF MASKED IMAGE MODELING
3.1 IS MIM BETTER IMAGE AUGMENTATION?
Compared to CNNs, Transformers gain tremendous performance improvement with carefully designed image augmentation techniques such as RandAug (Cubuk et al., 2020), CutMix (Yun et al., 2019), and random erasing (Zhong et al., 2020). Random erasing (Zhong et al., 2020) randomly removes part of the image and replaces it with Gaussian noise, while CutMix randomly removes part of the image and replaces the corresponding region with a patch from another image. Similarly, as in most MIM pre-training tasks, some image patches are masked out and replaced with a learnable mask token. Noticing the resemblance of the masking operations, we hypothesize that MIM as a pre-training task and masking-based data augmentations enhance the network's robustness towards occlusion, endowing the network with a more generalized feature extraction ability. To verify our hypothesis, we design an occlusion robustness test. Let x ∈ R3×H×W be an input image and y ∈ RC be its corresponding label, where C is the class number. Consider a classification task y = f(x), where f denotes a neural network; the network is considered robust if it outputs the correct label given an occluded version of the image x′, namely y = f(x′). For occlusion, we consider the patch-based random masking adopted in most MIM works (He et al., 2022; Xie et al., 2021b; Wei et al., 2021). In particular, we split the image of size 224× 224 into patches of size 16× 16 and randomly mask M patches out of the total number of N patches. The occlusion ratio can then be defined as M/N. We conduct experiments on ImageNet-100 (IN-100) (Krizhevsky et al., 2012b) for both Transformers and CNNs with different settings. We choose ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architectures. Robustness is compared under the following settings: (i) random weight initialization with no image augmentation applied; (ii) random weight initialization with different image augmentations applied; (iii) MIM pre-training as weight initialization with and without image augmentations applied. In Fig. 1, we report the average top-1 accuracy across five runs trained with different settings under various occlusion ratios. Fig. 1(a) and 1(b) show that both MIM and patch-removing-like augmentations significantly improve model occlusion robustness for both ViT-S and ResNet-50. Nevertheless, MIM yields more robust feature extraction than adopting augmentations. Although MIM and patch-removing-like augmentations share similar masking mechanisms, MIM explicitly forces the model to learn the interactions between patches in order to reconstruct missing patches, enabling more robust feature extraction. Comparing Fig. 1(a) and 1(b), the convex trend of accuracy for ViT-S indicates better robustness than the concave trend for ResNet-50. The self-attention mechanism of ViTs is able to model the interactions between patches with high degrees of freedom compared to CNNs constrained by convolution priors. We claim that
the success of MIM on ViTs can be seen as a resonance between the better patch interactions imposed by MIM and the self-attention mechanism of ViTs that supports them. A minimal sketch of the occlusion robustness test is given below.
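The sketch assumes dropped patches are zeroed out and that a standard (image, label) data loader is available; both are illustrative assumptions.

```python
import torch

@torch.no_grad()
def occlusion_accuracy(model, loader, ratio, patch=16, img=224):
    # Top-1 accuracy when M out of N 16x16 patches are randomly dropped.
    n_side = img // patch
    n_drop = int(n_side * n_side * ratio)
    correct = total = 0
    for x, y in loader:
        keep = torch.ones(x.size(0), n_side * n_side)
        idx = torch.rand_like(keep).argsort(dim=1)[:, :n_drop]
        keep.scatter_(1, idx, 0.0)                       # 0 = dropped patch
        keep = keep.view(-1, 1, n_side, n_side)
        keep = keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
        pred = model(x * keep).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```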
3.2 MIDDLE-ORDER INTERACTIONS FOR GENERALIZED FEATURE EXTRACTION
Next, we show that MIM essentially enables better middle-order interactions between patches. Note that existing MIM works adopt a medium or high masking ratio (Xie et al., 2021b; He et al., 2022) (e.g., 60% or 70%, see Fig. 2) during pre-training, and in these settings, the pairwise interactions between patches are under a middle-size context measured by the order m. Early inpainting work based on CNNs (Pathak et al., 2016) resembles MIM but attracted little attention due to its much inferior performance compared to contrastive learning methods. The inpainting task adopts the masking strategy illustrated in Fig. 1(c), which masks a full large region instead of random small patches. Such masking mechanisms ignore patch interactions and focus only on reconstruction, leading to poor learned representation quality. To investigate whether MIM makes the model more sensitive to patch interactions of some particular orders, we resort to the tool of multi-order interactions introduced by Deng et al. (2022) and Zhang et al. (2020). Intuitively, mth-order interactions of patches refer to inference patterns (deep features) induced from m patches of the original image in the input space. With a small value of m (low-order interactions), the model simply learns local features such as texture. Formally, the multi-order interaction I(m)(i, j) measures the order of interactions between patches i and j. We define I(m)(i, j) to be the average interaction utility between patches i and j over all contexts consisting of m patches, where m indicates the order of contextual complexity of the interaction. Mathematically, given an input image x with a set of n patches N = {1, . . . , n} (e.g., an image with n pixels), the multi-order interaction I(m)(i, j) is defined as:
$$I^{(m)}(i,j) = \mathbb{E}_{S \subseteq N \setminus \{i,j\},\ |S|=m}\left[\Delta f(i,j,S)\right], \quad (1)$$
where $\Delta f(i, j, S) = f(S \cup \{i, j\}) - f(S \cup \{i\}) - f(S \cup \{j\}) + f(S)$. Here, $f(S)$ indicates the network output score when the patches in the context $S \subseteq N$ are kept unchanged while the patches in $N \setminus S$ are replaced with the baseline value (Ancona et al., 2019). See Appendix B.2 for details. To measure the interaction complexity of the neural network, we measure the relative interaction strength $J^{(m)}$ of the encoded $m$-th order interactions as follows:
$$J^{(m)} = \frac{\mathbb{E}_{x\in\Omega}\,\mathbb{E}_{i,j}\,\big|I^{(m)}(i,j\,|\,x)\big|}{\mathbb{E}_{m'}\,\mathbb{E}_{x\in\Omega}\,\mathbb{E}_{i,j}\,\big|I^{(m')}(i,j\,|\,x)\big|}, \quad (2)$$
where Ω is the set of all samples and 0 ≤ m ≤ n − 2. J (m) is the average value over all possible pairs of patches of input samples. J (m) is normalized by the average value of all interaction strengths. J (m) thus indicates the distribution (the area under the curve sums up to one) of the order of interactions of the network. In this work, we use J (m) as the metric to evaluate and analyze interaction orders of the network with MIM pre-training. We conduct experiments on IN-100 with image size 224× 224 and use ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architectures. We consider a patch of size 16× 16 as an input patch. For the computation of J (m), we adopt the sampling solution following previous works (Deng et al., 2022; Zhang et al., 2020). As can be seen from Fig. 1(c), ViT-S with random weight initialization tends to learn simple interactions with few patches (e.g., less than 0.05n patches), while MIM pre-trained models show stronger interactions of relatively middle order (from 0.05n to 0.5n). Similarly, as observed from Fig. 1(d), MIM pre-trained ResNet-50 enhances the middle-order interactions from 0.1n to 0.55n compared to randomly initialized models. Stronger middle-order interactions form more complex features such as shape and edge compared to the local texture features learned from low-order interactions (Naseer et al., 2021).
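To make Eqns. (1)-(2) concrete, the following is a minimal sketch of estimating I(m)(i, j) by sampling contexts; the callable `f`, which maps a set of kept patches to the scalar network output, is an assumed interface.

```python
import random

def estimate_interaction(f, n, i, j, m, n_ctx=100):
    # Monte Carlo estimate of I^(m)(i, j): average interaction utility of
    # patches i and j over random contexts S of size m (Eqn. 1).
    others = [p for p in range(n) if p not in (i, j)]
    total = 0.0
    for _ in range(n_ctx):
        S = set(random.sample(others, m))
        total += f(S | {i, j}) - f(S | {i}) - f(S | {j}) + f(S)
    return total / n_ctx
```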
4 APPROACH
We propose a generic MIM framework following two design rules: (a) no complex or non-generic designs are adopted, to ensure compatibility with all network architectures; (b) better middle-order interactions between patches are encouraged for more generalized feature extraction. Figure 3 highlights the difference between our proposed framework and existing MIM frameworks in terms of three key components: the masking strategy, the encoder/decoder architecture design, and the prediction targets.
4.1 ARCHITECTURE AGNOSTIC FRAMEWORK
Mask Where Middle-order Interactions Occur. Existing works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b; Wei et al., 2021) adopt the masking strategy where the input image is divided into non-overlapping patches, and a random subset of patches is masked. MAE utilizes a Transformer as a decoder and takes only the visible patches into the encoder. Masked tokens are appended to the decoder to reconstruct the masked patches. SimMIM (Xie et al., 2021b) and MaskFeat (Wei et al., 2021) utilize a fully connected layer as the decoder and feed the mask token into the encoder together with the visible patches. The mask token (Devlin et al., 2018) is a token-shared learnable parameter that indicates the presence of missing patches to be predicted. Despite different choices of decoder structures, the mask token is placed either at the input to the encoder or at the decoder. Mathematically, the masking process of MIM is defined as x_{mask} = x \odot (1 - M) + T \odot M, where M is the random occlusion mask, \odot denotes element-wise multiplication, and T represents the learnable mask token. Such masking at the patch embedding layer aligns with the attention mechanism of Transformers, which is robust against occlusion. However, masking at the stem layer undermines the context extraction capability of CNNs, which rely on local inductive biases. Moreover, masking at the input stages of the network leads to low-order interactions. Thus, we propose to mask intermediate features, where the output feature contains both semantic and spatial information and the mask token can encode interactions with a medium number of tokens. More concretely, our masking operation is defined as z^l_{mask} = z^l + T \odot D(M), where z^l is the intermediate feature map of x at layer-l in the Transformer encoder (or stage-l in CNNs) and D(·) is the corresponding down-sampling function of the occlusion mask.
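A minimal PyTorch sketch of this masking operation is given below; the CNN-style (B, C, h, w) feature layout and the nearest-neighbor down-sampling D(·) are assumptions for illustration (for ViTs, the token would be added to the patch sequence analogously).

```python
import torch
import torch.nn.functional as F

class MaskTokenAt(torch.nn.Module):
    """Sketch of z^l_mask = z^l + T * D(M): a shared learnable mask token T
    is added onto an intermediate feature map (e.g., stage-l of a CNN)."""

    def __init__(self, channels):
        super().__init__()
        self.token = torch.nn.Parameter(torch.zeros(1, channels, 1, 1))  # T

    def forward(self, z, mask):
        # z: (B, C, h, w) intermediate features; mask: (B, 1, H, W), 1 = masked
        m = F.interpolate(mask.float(), size=z.shape[-2:], mode="nearest")  # D(M)
        return z + self.token * m
```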
Filling Masked Tokens with RGB Mean. It is worth noting that existing works directly replace the occluded patches with the mask token in the input space or after the patch embedding (Bao et al., 2022; Xie et al., 2021b). In contrast, we use the average RGB value to fill the occluded patches as the input to the encoder and add the mask token onto the intermediate feature maps of the encoder. The masking mechanism originates from NLP, where languages are of high-level semantics and do not require low-level feature extraction as image processing does. Introducing a zero mask at the early stages of the network, where low-level feature extraction happens, is harmful in terms of feature extraction. From the view of the Fourier domain, the RGB mean is the DC component of images. It not only brings about minimum local statistics variation caused by the masking operation but also forces the network to model the more informative medium frequencies instead of filling the masked patches with blurry color blocks of low frequencies. The proposed masking strategy is generic to both convolution and self-attention in that it accommodates low-level to semantic-level feature extraction.
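The corresponding input-side operation can be sketched as follows, assuming a pixel-level binary mask M:

```python
import torch

def mask_input_with_rgb_mean(x, mask):
    """Fills occluded patches of the input with the per-image mean RGB value
    (the DC component) instead of zeros or an input-space mask token.
    A sketch: x is (B, 3, H, W); mask is (B, 1, H, W) with 1 = masked."""
    rgb_mean = x.mean(dim=(2, 3), keepdim=True)   # per-image, per-channel mean
    return x * (1 - mask) + rgb_mean * mask
```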
4.2 MIDDLE-ORDER INTERACTIONS FROM FOURIER PERSPECTIVE
Current works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b) adopt raw RGB values as the prediction target. However, raw pixels in the spatial domain are heavily redundant and often contain low-order statistics (Bao et al., 2022; Wei et al., 2021; Zhou et al., 2021). MaskFeat (Wei et al., 2021) adopts the Histogram of Oriented Gradients (HOG) as the prediction target, outperforming MAE and SimMIM. HOG is a discrete descriptor of medium- or high-frequency features which captures shape patterns based on middle-order interactions. ViTs and CNNs have low-pass and high-pass filtering properties, respectively (Park & Kim, 2022; 2021). ViTs and CNNs each have certain frequency bands that they cannot model well, and both cannot model middle-order interactions well (detailed in Appendix B.3). The observation that the medium-frequency descriptor HOG improves middle-order interactions leads to the hypothesis that learning medium frequencies would help the model learn more middle-order interactions. Given an RGB image x ∈ R3×H×W , the discrete Fourier transform (DFT) of each channel is defined as:
F(u, v) = \sum_{h=1}^{H} \sum_{w=1}^{W} x(h, w) \, e^{-2\pi j (\frac{uh}{H} + \frac{vw}{W})}.  (3)
In addition to the common MIM loss in the spatial domain Lspa, we propose Lfreq in the Fourier domain:

L_{freq} = \sum_{c=1}^{3} \sum_{u=1}^{H} \sum_{v=1}^{W} \omega(u, v) \left\| \mathrm{DFT}\big(x^{pred}_{c} \odot M + \mathrm{de}(x^{pred}_{c}) \odot (1 - M)\big) - \mathrm{DFT}(x_{c}) \right\|,  (4)

where xpred is the predicted image, de(·) is the stop-gradient (detach) operation, and ω(u, v) is the frequency weighting matrix. ω(u, v) enables both ViTs and CNNs to model features of medium frequencies rather than local textures and noise corresponding to high frequencies. Inspired by the Focal Frequency loss (Jiang et al., 2021), we define the adaptive ω(u, v) as follows:
\omega(u, v) = \left\| \mathrm{DFT}\big(x^{pred}_{c} \odot M + \mathrm{de}(x^{pred}_{c}) \odot (1 - M)\big) - \mathrm{DFT}(x_{c}) \right\|^{\alpha},  (5)
where α is a scaling factor, and we set α = 1. Fig. A5 verifies that Eq. (5) allows the model to learn previously ignored frequencies (mostly the medium-frequency components). Note that Lfreq introduces negligible overhead by using Fast Fourier Transform (FFT) algorithms with O(n log n) complexity to compute the DFT. The overall loss function of A2MIM is then defined as:
L = L_{spa} + \lambda L_{freq},  (6)
where L_{spa} = \left\| (x^{pred} - x) \odot M \right\| is the spatial reconstruction loss computed on masked patches and λ is a loss weighting parameter. We set λ to 0.5 by default.
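Putting Eqs. (4)-(6) together, a minimal PyTorch sketch of the training loss could look as follows; the per-pixel normalization of L_spa and detaching the adaptive weight ω are assumptions in the spirit of SimMIM and the Focal Frequency loss, not the exact released implementation.

```python
import torch

def a2mim_loss(x_pred, x, mask, alpha=1.0, lam=0.5):
    """Sketch of Eqs. (4)-(6). x_pred, x: (B, 3, H, W); mask: (B, 1, H, W)
    with 1 marking masked pixels."""
    # Spatial loss (Eq. 6): l1 distance on the masked region only
    l_spa = (torch.abs(x_pred - x) * mask).sum() / (mask.sum() * x.size(1)).clamp(min=1)

    # Eq. (4): keep gradients only through the masked region of the prediction
    mixed = x_pred * mask + x_pred.detach() * (1 - mask)
    diff = (torch.fft.fft2(mixed, dim=(-2, -1)) - torch.fft.fft2(x, dim=(-2, -1))).abs()
    omega = diff.detach() ** alpha   # adaptive frequency weight (Eq. 5), no gradient
    l_freq = (omega * diff).mean()

    return l_spa + lam * l_freq      # Eq. (6), lambda = 0.5 by default
```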
5 EXPERIMENTS
5.1 PRE-TRAINING SETUP
We adopt ResNet-50 (He et al., 2016) and Vision Transformer (Dosovitskiy et al., 2021) (ViT-S/16 and ViT-B/16) as the backbones. We pre-train on the ImageNet-1K (IN-1K) training set using the AdamW (Loshchilov & Hutter, 2019) optimizer with a base learning rate of 1.5× 10−4, adjusted by a cosine learning rate scheduler, and a batch size of 2048. The input image size is 224× 224 with a mask patch size of 32× 32. We use a random masking ratio of 60%. By default, the learnable mask tokens are placed at stage-3 in ResNet-50 and layer-5/layer-8 in ViT-S/ViT-B, respectively. We adopt a linear prediction head as the decoder (Xie et al., 2021b). A2MIM+ indicates adopting HOG as supervision and using the MLP decoder with depth-wise (DW) convolution. Our experiments are implemented in PyTorch on OpenMixup (Li et al., 2022) and conducted on workstations with NVIDIA V100 GPUs. We report the average results of 3 trials for all experiments and use bold and underline to indicate the best and the second-best performance. See Appendix A for detailed pre-training settings.
5.2 IMAGE CLASSIFICATION ON IMAGENET-1K
Evaluation Protocols. We first evaluate the learned representations by end-to-end fine-tuning (FT) and linear probing (Lin.) protocols on IN-1K. For evaluation on CNNs, we adopt the RSB A2/A3 (Wightman et al., 2021) training settings for fine-tuning ResNet-50, which employ the LAMB (You et al., 2020) optimizer with a cosine scheduler for 300/100 epochs. For the linear probing setting on ResNet-50, we freeze the backbone features and train a linear classifier with an initial learning rate of 30 and a batch size of 256 following MoCo (He et al., 2020). For evaluation on Transformers, we employ the fine-tuning protocol of MAE (He et al., 2022), which uses the DeiT (Touvron et al., 2021) augmentation setting and an AdamW optimizer for 100-epoch training, and adopts a layer-wise learning rate decay of 0.65 following (Bao et al., 2022). See Appendix A for detailed evaluation configurations.
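As a rough illustration of the linear probing protocol above, the sketch below freezes the backbone and trains only a linear classifier with the stated initial learning rate of 30; the cosine schedule and momentum value are assumptions, and feat_dim/loader are placeholders.

```python
import torch

def linear_probe(backbone, feat_dim, num_classes, loader, epochs=90):
    """Minimal linear-probing sketch: backbone frozen, linear head trained."""
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False
    head = torch.nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=30.0, momentum=0.9)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = backbone(images)              # frozen features
            loss = torch.nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad(); loss.backward(); opt.step()
        sched.step()
    return head
```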
ResNet-50. We compare the proposed A2MIM with classical self-supervised learning methods (Inpainting (Pathak et al., 2016), Relative-Loc (Doersch et al., 2015), and Rotation (Gidaris et al., 2018)), contrastive learning (CL), and MIM methods with 100/300 pre-training epochs. We modify MIM methods to run them on ResNet-50: the learnable mask token is applied to the encoders of BEiT (Bao et al., 2022), Data2Vec (Baevski et al., 2022), and SimMIM (Xie et al., 2021b) after the
Table 3: Performance of object detection and semantic segmentation tasks based on ResNet-50 on COCO and ADE20K.

Method | Epochs | COCO APbox | COCO APmask | ADE-20K mIoU
PyTorch (Sup.) | 120 | 38.2 | 33.3 | 36.1
SimCLR | 800 | 37.9 | 33.3 | 37.6
MoCoV2 | 400 | 39.2 | 34.3 | 37.5
BYOL | 400 | 38.9 | 34.2 | 37.2
SwAV | 800 | 38.4 | 33.8 | 37.3
SimSiam | 400 | 39.2 | 34.4 | 37.2
Barlow Twins | 800 | 39.2 | 34.3 | 37.3
SimMIM‡ | 300 | 39.1 | 34.2 | 37.4
CIM | 300 | - | - | 38.0
A2MIM | 300 | 39.8 | 34.9 | 38.3
Table 4: Performance of object detection and semantic segmentation tasks based on ViT-B on COCO and ADE-20K.
Method | Supervision | Epochs | COCO APbox | COCO APmask | ADE-20K mIoU
DeiT (Sup.) | Label | 300 | 47.9 | 42.9 | 47.0
MoCoV3 | CL | 300 | 47.9 | 42.7 | 47.3
DINO | CL | 400 | 46.8 | 41.5 | 47.2
BEiT | DALLE | 300 | 43.1 | 38.2 | 47.1
iBOT | Momentum | 400 | 48.4 | 42.7 | 48.0
MAE | RGB | 1600 | 48.5 | 42.8 | 48.1
MaskFeat | HoG | 800 | 49.2 | 43.2 | 48.8
SimMIM | RGB | 800 | 48.9 | 43.0 | 48.4
CAE | DALLE | 800 | 49.2 | 43.3 | 48.8
A2MIM | RGB | 800 | 49.4 | 43.5 | 49.0
stem (the output feature at 56× 56 resolution); the encoder of MAE randomly selects 25% of the 56× 56 output features of the stem as unmasked patches and takes the reorganized 28× 28 patches as the input to the four stages. As shown in Tab. 1, our approach achieves performance competitive with state-of-the-art contrastive-based methods under 100-epoch RSB A3 fine-tuning. Note that MIM methods see fewer training samples per epoch than CL methods (40% vs. 200% of patches) and usually require longer pre-training. Based on a longer fine-tuning evaluation using RSB A2, our method (300-epoch) outperforms contrastive-based methods with even fewer training epochs. Meanwhile, A2MIM also improves the baseline SimMIM† (+0.8%) and the concurrent work CIM (+0.4%) in terms of RSB A3 fine-tuning for the longer pre-training. Besides, we also report the linear probing accuracy under fast pre-training for reference, although our main focus is to learn representations with better fine-tuning performance. Although the linear probing performance of our method is lower than that of contrastive-based methods, it still improves the baseline by 0.6%.
ViT. We then evaluate A2MIM based on ViT-S/B in Tab. 2. We list the supervision target used by various pre-training methods in the second column of Tab. 2. DALL-E (Ramesh et al., 2021) and VQGAN (Esser et al., 2021) are pre-trained image tokenizers, while momentum refers to the momentum encoder. Our approach outperforms current state-of-the-art methods with complex supervision, e.g., SplitMask (MIM with CL combined), iBOT (complex teacher-student architecture), and CIM (pre-trained BEiT as supervision). Based on ViT-S/B, A2MIM improves the baseline SimMIM by 0.5%/0.4% with RGB as supervision and 0.7%/0.7% with the HOG feature as supervision.
5.3 TRANSFER LEARNING EXPERIMENTS
Object detection and segmentation on COCO. To verify transfer abilities, we benchmark CL and MIM methods on object detection and segmentation with COCO (Lin et al., 2014). For evaluation on CNNs, we follow the setup in MoCo, which fine-tunes Mask R-CNN (He et al., 2017) with the ResNet-50-C4 backbone using the 2× schedule on COCO train2017 and evaluates on COCO val2017. Results in Tab. 3 indicate that our approach (300-epoch) outperforms contrastive-based methods with longer pre-training (+0.7% APbox and +0.6% APmask). For evaluation on Transformers, we follow MAE and CAE, which efficiently fine-tune Mask R-CNN with the ViT-B backbone using the 1× schedule. In Tab. 4, our approach (800-epoch) is superior to popular contrastive-based and MIM methods, e.g., it outperforms MAE (1600-epoch) by 0.9% APbox and 0.8% APmask.
Semantic segmentation on ADE20K. We then evaluate the transfer performance on semantic segmentation with ADE20K (Zhou et al., 2019) by fine-tuning UperNet (Xiao et al., 2018). Based on ResNet-50, all CNN models are fine-tuned for 160K iterations with SGD following MoCo. Results in Tab. 3 show that our method outperforms CL methods by at least 0.9% mIoU and outperforms CIM (which requires an extra pre-trained BEiT (Bao et al., 2022)) by 0.3% mIoU. Based on ViT-B, we fine-tune models for 80K iterations with AdamW following MAE. Tab. 4 shows that our approach consistently improves over MIM methods (e.g., it outperforms MAE and SimMIM by 0.9% and 0.6% mIoU).
5.4 ABLATION STUDY
We next verify the effectiveness of the proposed components. Ablation studies are conducted with ResNet-50 and ViTs on IN-100 and IN-1K using the fine-tuning protocol. Based on the modified baseline SimMIM (Lspa), we first compare different mask token mechanisms: Replacing denotes the original way in most MIM methods, and Addition denotes our proposed way that adds the mask token to intermediate feature maps of the backbone. As shown in Fig. 5, adding the mask token to the medium stages (stage-3) or layers (layer-5) yields the best performance. Replacing masked patches in input images with the RGB mean value slightly improves the baseline SimMIM, especially for ResNet-50 (88.19 vs. 87.75 on IN-100). Then, we verify the proposed Lfreq in Tab. 5. We find that simply using Lfreq without the adaptive re-weighting ω (Eqn. 5) brings limited improvement as a frequency-domain constraint complementary to Lspa, while employing ω further enhances performance by helping the model learn more informative frequency components. Additionally, we visualize reconstruction results in Fig. 4 to show the improvements brought by our proposed components (more results in Appendix B).
5.5 VERIFICATION OF A2MIM DESIGN RULES

Table 6: Analysis of the scalability of A2MIM for advanced components on IN-1K.

Component | Module | ResNet-50 | ViT-B
Decoder | Linear | 78.8 | 82.4
Decoder | 2-layer MLP | 78.8 | 82.4
Decoder | 2-layer MLP (w/ DW) | 78.9 | 82.5
Decoder | 2-layer Transformer | 78.6 | 82.3
Decoder | 2-layer Transformer (w/ DW) | 78.8 | 82.6
Target | RGB | 78.8 | 82.4
Target | HoG Feature | 78.9 | 82.6
Target | DINO Feature | 78.9 | 82.7

We verify whether A2MIM meets the intended design rules using the same experiment settings as Sec. 5.4: (i) A2MIM is generic enough to incorporate advanced components proposed in previous works (e.g., complex decoders, advanced prediction targets). As for the decoder structure, we replace the original linear decoder with 2-layer MLP or Transformer decoders, but find limited improvements or degenerated performances (similar to SimMIM) in Tab. 6. Inspired by PVT.V2 (Wang et al., 2022), we introduce a depth-wise (DW) convolution layer (w/ DW) to the MLP decoder (adding a 5× 5 DW layer in between) and the Transformer decoder (adding a 3× 3 DW layer in each FFN (Wang et al., 2022)), which brings improvements compared to the linear decoder. As for the prediction target, we follow MaskFeat to change the RGB target to the HoG feature or the output feature of ViT-B/16 pre-trained for 1600 epochs by DINO (Caron et al., 2021). Tab. 6 shows that using advanced targets significantly improves the performance of A2MIM for both ResNet-50 and ViT-B. Therefore, we can conclude that A2MIM is a generally applicable framework. (ii) A2MIM enhances occlusion robustness and middle-order interaction among patches, as shown by experiments on ImageNet-1K in Fig. A3.
6 CONCLUSION
In this paper, we delved deep into MIM and answered the question of what exactly is learned during MIM pre-training. We adopted multi-order interactions to study the interaction order among image patches. We discovered that MIM essentially teaches the network to learn middle-order interactions among image patches for more complex feature extraction, regardless of the network architecture. Based on our findings, we further proposed a general framework, A2MIM, that is compatible with both Transformers and CNNs for MIM tasks, aiming at enhancing patch interactions during self-supervised pre-training. Besides a different mask token mechanism, we proposed a loss in the Fourier domain to better learn the middle-order interactions. Experimental results have shown that our proposed framework improves the representations learned for both CNNs and Transformers, yielding superior performance to state-of-the-art methods on various downstream tasks.
A DETAILS OF COMPARISON EXPERIMENTS
This section provides experimental details for Sec. 5, e.g., pre-training and evaluation on ImageNet-1K and transfer learning settings on downstream tasks.
A.1 IMAGENET-1K EXPERIMENTS
Pre-training. The default settings of A2MIM for ResNet-50 and ViTs are provided in Tab. A1, following SimMIM (Xie et al., 2021b). We use the AdamW (Loshchilov & Hutter, 2019) optimizer with the cosine scheduler and the linear learning rate scaling rule (Goyal et al., 2020): lr = base_lr × batchsize / 256. Similar to current MIM methods, we only use RandomResizedCrop with a scale of (0.67, 1.0) and do not employ other complex augmentations (e.g., Rand Augment (Cubuk et al., 2020), mixups (Yun et al., 2019), or stochastic depth) during pre-training. As for ViTs, we adopt cosine decay for 100- and 300-epoch pre-training, while using step decay (the learning rate is multiplied by 0.1 at epoch 700) for 800-epoch pre-training.
End-to-end fine-tuning. Our fine-tuning settings follow common practices of supervised image classification on ImageNet-1K. As shown in Tab. A2, we fine-tune pre-trained ViTs for 100 epochs using the DeiT (Touvron et al., 2021) training recipe, which employs the AdamW (Loshchilov & Hutter, 2019) optimizer with the cross-entropy (CE) loss; we fine-tune pre-trained ResNet-50 for 100/300 epochs using the RSB A3/A2 (Wightman et al., 2021) settings, which employ the LAMB (You et al., 2020) optimizer with the binary cross-entropy (BCE) loss. Additionally, we use layer-wise learning rate decay as in (Bao et al., 2022) for fine-tuning ViT models.
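A sketch of how such layer-wise decay can be wired into the optimizer is shown below; layer_id_of is a hypothetical helper that maps a parameter name to its transformer block index (patch embedding -> 0, last block -> num_layers - 1).

```python
import torch

def layerwise_lr_param_groups(model, base_lr, decay=0.65, num_layers=12):
    """Sketch of layer-wise learning-rate decay for ViT fine-tuning:
    parameters of layer l receive base_lr * decay ** (num_layers - l)."""
    groups = {}
    for name, param in model.named_parameters():
        lid = layer_id_of(name, num_layers)   # assumed user-provided mapping
        if lid not in groups:
            groups[lid] = {"params": [], "lr": base_lr * decay ** (num_layers - lid)}
        groups[lid]["params"].append(param)
    return list(groups.values())

# Usage sketch:
# opt = torch.optim.AdamW(layerwise_lr_param_groups(vit, 2.5e-4), weight_decay=0.05)
```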
Table A1: ImageNet-1K A2MIM pre-training settings for ResNet-50 and ViT models.
Configuration | ResNet-50 | ViTs
Pre-training resolution | 224× 224 | 224× 224
Mask patch size | 32× 32 | 32× 32
Mask ratio | 60% | 60%
Optimizer | AdamW | AdamW
Base learning rate | 1.5× 10−4 | 1× 10−4
Weight decay | 0.05 | 0.05
Optimizer momentum | β1, β2 = 0.9, 0.999 | β1, β2 = 0.9, 0.999
Batch size | 2048 | 2048
Learning rate schedule | Cosine | Cosine / Step
Warmup epochs | 10 | 10
RandomResizedCrop | ✓ | ✓
Rand Augment | ✗ | ✗
Stochastic Depth | ✗ | ✗
Gradient Clipping | ✗ | max norm = 5
Table A2: ImageNet-1K fine-tuning recipes for ResNet-50 (RSB A2/A3) and ViTs (DeiT).
Configuration | ViTs (DeiT) | ResNet-50 (RSB A2) | ResNet-50 (RSB A3)
FT epochs | 100 | 300 | 100
Training resolution | 224 | 224 | 160
Testing resolution | 224 | 224 | 224
Testing crop ratio | 0.875 | 0.95 | 0.95
Optimizer | AdamW | LAMB | LAMB
Base learning rate | 2.5× 10−4 | 1.5× 10−3 | 1× 10−3
Weight decay | 0.05 | 0.02 | 0.02
Batch size | 1024 | 2048 | 2048
Learning rate schedule | Cosine | Cosine | Cosine
Warmup epochs | 5 | 5 | 5
Label smoothing | 0.1 | ✗ | ✗
Stochastic depth | 0.1 | 0.05 | ✗
Gradient clipping | 5.0 | ✗ | ✗
Rand Augment | (9, 0.5) | (7, 0.5) | (6, 0.5)
Mixup alpha | 0.8 | 0.1 | 0.1
CutMix alpha | 1.0 | 1.0 | 1.0
Loss function | CE loss | BCE loss | BCE loss
A.2 OBJECT DETECTION AND SEGMENTATION ON COCO
We adopt the Mask R-CNN (He et al., 2017) framework to perform transfer learning to object detection and segmentation on COCO (Lin et al., 2014) in Detectron21. For evaluation on ResNet-50, we follow MoCo (He et al., 2020) and fine-tune Mask R-CNN with the pre-trained ResNet-50-C4 backbone using the 2× schedule (24 epochs). For evaluation of ViTs, we follow MAE (He et al., 2022), which employs the pre-trained ViT backbone and an FPN neck (Lin et al., 2017) in Mask R-CNN, and fine-tune the model using the 1× schedule (12 epochs). For a fair comparison, we follow (Bao et al., 2022; Xie et al., 2021b) to turn on the relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning, initialized as zero.
A.3 SEMANTIC SEGMENTATION ON ADE-20K
We adopt UperNet (Xiao et al., 2018) to perform transfer learning to semantic segmentation on ADE-20K and use the semantic segmentation implementation in MMSegmentation2. We initialize
1https://github.com/facebookresearch/detectron2 2https://github.com/open-mmlab/mmsegmentation
[Figure A1 plots omitted. Panels: (a) ViT-S Random PatchDrop and (b) ResNet-50 Random PatchDrop, top-1 accuracy (%) vs. occlusion ratio (%) for BYOL/MoCoV3/MAE/SimMIM with DeiT or vanilla fine-tuning; (c) ViT-S and (d) ResNet-50, interaction strength J(m) vs. order m/n.]
Figure A1: (a)(b): Robustness against different occlusion ratios of images (CL vs. MIM) is studied for both ViT-S and ResNet-50 on ImageNet-100. (c)(d): Distributions of the interaction strength J (m) (CL vs. MIM) are explored for both ViT-S and ResNet-50 on ImageNet-100. The label indicates the pre-training method + fine-tuning augmentation used, random stands for random weight initialization.
the UperNet using the pre-trained backbones (ResNet-50 or ViTs) on ImageNet-1K. For ViTs, we fine-tune end-to-end for 80K iterations by AdamW with a batch size of 16. We search for an optimal layer-wise decay from {0.8, 0.9} and an optimal learning rate from {1× 10−4, 2× 10−4, 3× 10−4} for all competitors. Similar to the fine-tuning settings on COCO, we use the relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning as in (Bao et al., 2022; Xie et al., 2021b). For ResNet-50, we follow MoCo (He et al., 2020), i.e., all CNN models are fine-tuned for 160K iterations by SGD with a momentum of 0.9 and a batch size of 16.
B EMPIRICAL EXPERIMENTS
This section provides background information and experimental details for Sec. 3. We also provide additional results of occlusion robustness evaluation and multi-order interaction strength.
B.1 OCCLUSION ROBUSTNESS
In Sec. 3.1, we analyze the robustness against occlusion of fine-tuned models on ImageNet-100 (a subset of ImageNet-1K introduced by Tian et al. (2020)) using the official implementation3 provided by Naseer et al. (2021). Both MIM and contrastive-based methods are pre-trained for 400 epochs on ImageNet-100 using their pre-training settings for ImageNet-1K. We adopt the fine-tuning training recipe of DeiT in Tab. A2 and use the same setting (100-epoch) for both ViT-S and ResNet-50. Note that we use the modified SimMIM for ResNet-50 (replacing masked patches in the input image with the RGB mean) in all experiments.
As shown in Fig. 1 and A1, we compare MIM pre-trained models, supervised models trained with various augmentations, and contrastive learning pre-trained models in terms of top-1 accuracy under various occlusion ratios. We find that MIM methods show better occlusion robustness on both Transformers and CNNs. In addition to Sec. 3.1, we also provide results of salient occlusion for ViT-S and ResNet-50 on ImageNet-100 in Fig. A2. Note that the occlusion ratio means the ratio of dropped patches to the total number of patches, and we plot the mean accuracy across 3 runs. We can conclude that MIM pre-trained models have stronger robustness against random and salient occlusions than supervised and contrastive-based methods.
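For reference, a minimal sketch of the random PatchDrop evaluation is given below; zero-filling the dropped patches follows the common PatchDrop protocol and is an assumption here.

```python
import torch

@torch.no_grad()
def occlusion_top1(model, images, labels, ratio, patch=16):
    """Sketch of the random PatchDrop test: drop a `ratio` fraction of
    16x16 patches (zeroed here) and measure top-1 accuracy."""
    B, C, H, W = images.shape
    gh, gw = H // patch, W // patch
    n = gh * gw
    # Exactly int(ratio * n) randomly placed patches are marked as dropped
    drop = torch.rand(B, n).argsort(dim=1) < int(ratio * n)
    mask = drop.float().view(B, 1, gh, gw)
    mask = torch.nn.functional.interpolate(mask, size=(H, W), mode="nearest")
    occluded = images * (1 - mask)                 # dropped patches set to zero
    pred = model(occluded).argmax(dim=1)
    return (pred == labels).float().mean().item()
```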
B.2 MULTI-ORDER INTERACTION
In Sec. 3.2, we interpret what is learned by MIM via multi-order interactions (Deng et al., 2022; Zhang et al., 2020). The interaction complexity can be represented by I(m)(i, j) (defined in Eqn. 1), which measures the average interaction utility between variables i, j on all contexts consisting of m variables. Notice that the order m reflects the contextual complexity of the interaction I(m)(i, j). For example, a low-order interaction (e.g., m = 0.05n) means a relatively simple collaboration between variables i, j, while a high-order interaction (e.g., m = 0.95n) corresponds to a complex collaboration. As figured out in the representation bottleneck (Deng et al., 2022), deep neural networks (DNNs) are more likely to encode both low-order interactions and high-order interactions, but often fail to learn middle-order interactions. We hypothesize that MIM helps models learn more middle-order
3https://github.com/Muzammal-Naseer/Intriguing-Properties-of-Vision-Transformers
[Figure A2 plots omitted. Panels: (a) ViT-S Random PatchDrop, (b) ViT-S Salient PatchDrop, (c) ResNet-50 Random PatchDrop, (d) ResNet-50 Salient PatchDrop; each plots top-1 accuracy (%) vs. occlusion ratio (%).]
Figure A2: Robustness against various random or salient occlusion ratios of images is studied in (a)(b) for ViT-S, and (c)(d) for ResNet-50 using various experimental settings on ImageNet-100. The label indicates the pre-training method + fine-tuning setting used; random stands for random weight initialization.
[Figure A3 plots omitted. Panels: (a) ViT-S Random PatchDrop and (b) ResNet-50 Random PatchDrop, top-1 accuracy (%) vs. occlusion ratio (%) for MoCoV3/BYOL/A2MIM/SimMIM/PyTorch baselines; (c) ViT-S and (d) ResNet-50, interaction strength J(m) vs. order m/n.]
Figure A3: Verification of robustness and interaction of A2MIM with ViT-S and ResNet-50 on ImageNet-1K. (a)(b): Robustness against different occlusion ratios of images is studied for A2MIM and various methods. (c)(d): Distributions of the interaction strength J(m) are explored.
interactions since MIM has a natural advantage in cases where some parts of the image are masked out. In Fig. 1, we calculate the interaction strength J(m) (defined in Eqn. 2) for fine-tuned models on ImageNet-100 using the official implementation4 provided by Deng et al. (2022). Specifically, we use the image of 224× 224 resolution as the input and calculate J(m) on 14× 14 grids, i.e., n = 14× 14. We set the model output as f(x_S) = \log \frac{P(\hat{y} = y \mid x_S)}{1 - P(\hat{y} = y \mid x_S)} given the masked sample x_S, where y denotes the ground-truth label and P(\hat{y} = y \mid x_S) denotes the probability of classifying the masked sample x_S to the true category.
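A direct implementation of this output score could be sketched as follows (the clamp for numerical stability is an assumption):

```python
import torch

def f_score(model, x_S, y):
    """Log-odds output used for the interaction metric (a sketch):
    f(x_S) = log( P(yhat = y | x_S) / (1 - P(yhat = y | x_S)) )."""
    prob = torch.softmax(model(x_S), dim=1)[:, y]
    prob = prob.clamp(1e-6, 1 - 1e-6)   # avoid log(0) / division by zero
    return torch.log(prob / (1 - prob))
```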
B.3 MIM FROM FREQUENCY PERSPECTIVE
We first plot the log magnitude of Fourier-transformed feature maps of ResNet-50 with different pre-training methods using the tools5 provided by Park & Kim (2022) on ImageNet-1K. Following (Park & Kim, 2022), we first convert feature maps into the frequency domain and represent them on the normalized frequency domain (the highest frequency components are at {−π,+π}). In Fig. A4(a), we report the amplitude ratio of high-frequency components by using ∆ log amplitude. As shown in Fig. A4(a), inpainting and MIM show similar low-pass filtering effects at convolution layers compared to contrastive learning. This indicates that inpainting and MIM reduce the noise and uncertainty induced by high-frequency features. We argue that the reconstruction performance of MIM is mainly related to low- or high-order interactions of patches (Deng et al., 2022), while reconstruction performance is not directly related to the learned representation quality. Then, we provide the standard deviation of feature maps by block depth as in (Park & Kim, 2022; 2021), which first calculates the feature map variance on the last two dimensions and then averages over the channel dimension for the whole dataset. Fig. A4(b) shows the feature variance of each layer of ResNet-50 with different pre-training methods on IN-1K. This figure indicates that MIM tends to reduce the feature map variance; conversely, supervised training, inpainting, and contrastive learning based on CNNs tend to increase the variance. Compared to MIM, which learns better middle-order interactions, the inpainting task fails to filter out low-order interactions and thus leads to higher variance. To conclude, MIM methods learn middle-order interactions and reduce the feature map uncertainty (high frequencies) based on the CNN encoder for generalized and stabilized feature extraction.
4https://github.com/Nebularaid2000/bottleneck 5https://github.com/xxxnell/how-do-vits-work
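For reference, simplified sketches of the two statistics used above (relative log amplitude of high-frequency components and feature-map variance) are given below; they approximate, rather than reproduce, the official analysis tools.

```python
import torch

def high_freq_log_amplitude(feat):
    """Rough sketch of the Delta-log-amplitude statistic: log amplitude of the
    highest-frequency component relative to the lowest. feat: (B, C, H, W)."""
    f = torch.fft.fftshift(torch.fft.fft2(feat, dim=(-2, -1)), dim=(-2, -1))
    amp = torch.log(f.abs() + 1e-6)
    H, W = feat.shape[-2:]
    low = amp[..., H // 2, W // 2]    # center after fftshift = lowest frequency
    high = amp[..., 0, 0]             # corner = (near-)highest frequency
    return (high - low).mean().item()

def feature_variance(feat):
    """Variance over spatial dims, averaged over channels (and the batch)."""
    return feat.var(dim=(-2, -1)).mean().item()
```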
[Figure A4 plots omitted. Both panels plot statistics vs. normalized network depth for BYOL, MoCoV3, Inpainting, SimMIM, DeiT (Sup.), and Random: (a) log amplitude of high-frequency components; (b) feature map variance.]
Figure A4: (a) Fourier transformed feature maps. The vertical axis is the relative log amplitudes of the high-frequency components, and the horizontal axis is the normalized depth of the network. The blue columns indicate the pooling layers, while the white columns indicate the convolution layers. (b) Feature map variance. The vertical axis is the average variance value of feature maps. DeiT (Sup.) is supervised pre-training. The results of the randomly initialized network are plotted for reference.
[Figure A5 images omitted: a raw image (fox), its prediction, their Fourier spectra, and the Lfreq loss weights with and without ω.]
Figure A5: Visualization of predicted images and the Lfreq loss weight in the Fourier domain. From the view of the Fourier spectrum, the raw image (left) contains 99% low-frequency components (which usually present the contents) and rich medium-frequency (structural patterns) and high-frequency components (local details and noises), while the predicted result (middle) provides fewer medium- or high-frequency components. Calculated in the Fourier domain, the loss weights (right) of Lfreq w/o ω help the model to learn the full spectrum, while Lfreq focuses on the low- and medium-frequency parts, which are more likely to be low-order or middle-order interactions.
C MORE EXPERIMENT RESULTS
C.1 ABLATION OF THE PROPOSED MODULES
In addition to the ablation studies in Sec. 5.4, we provide an ablation study of the proposed Lfreq in the Fourier domain, as shown in Figure A5. As we discussed in Sec. 4, we hypothesize that learning medium frequencies helps the model learn more middle-order interactions. We thereby propose Lfreq to tackle the dilemma of Lspa, which tends to learn low-frequency components (i.e., contents reflected by high-order interactions). Although the reconstruction loss in the Fourier domain has a global perception, the high-frequency components are usually constructed by local details and noises (i.e., low-order interactions), which might hurt the generalization abilities. Therefore, we introduce the re-weighting ω(u, v) to force the model to learn more medium-frequency components, which correspond to middle-order interactions. Then, we perform a further analysis of the masked patch size for A2MIM in Tab. A3. Note that we pre-train ResNet-50 for 100 epochs and ViT-B for 400 epochs on ImageNet-1K and report the fine-tuning results. As shown in Tab. A3, when the mask ratio is 60%, the optimal masked patch size is 32× 32 for A2MIM, which is the same as for SimMIM.
Table A3: Ablation of masked patch size for A2MIM based on ResNet-50 and ViT-B on ImageNet-1K.

Model | Masked patch size | Mask ratio | PT epochs | Top-1 Accuracy (%)
ResNet-50 | 8 / 16 / 32 / 64 | 0.6 | 100 | 78.2 / 78.6 / 78.8 / 78.7
ViT-B | 8 / 16 / 32 / 64 | 0.6 | 400 | 82.9 / 83.4 / 83.5 / 83.3
C.2 ANALYSIS OCCLUSION ROBUSTNESS AND INTERACTION OF A2MIM
We further analyze the occlusion robustness and interaction strength of A2MIM with ViT-S (pre-trained for 400 epochs) and ResNet-50 (pre-trained for 100 epochs) on ImageNet-1K, as shown in Fig. A3. Fig. A3(a) and A3(b) show that A2MIM is more robust to occlusion than the baseline SimMIM and contrastive learning methods with both Transformers and CNNs. Meanwhile, we find that MIM methods learn more balanced interaction strengths than both supervised and contrastive learning methods in Fig. A3(c) and A3(d). A2MIM further improves SimMIM by capturing more middle-order interactions (0.2n to 0.6n) with Transformers and CNNs. Therefore, we can conclude that A2MIM helps the model to learn better middle-order interactions between patches for a more generalized visual representation.
C.3 SCALING-UP A2MIM
Additionally, we scale up the model size of the backbone encoders to verify the performance of A2MIM with ResNet and ViT on ImageNet-1K. As shown in Table A4, our proposed A2MIM and its advanced variant A2MIM+ consistently improve both contrastive-based and MIM methods on architectures of all scales, e.g., A2MIM outperforms SimMIM by 0.5%/0.5%/0.5%/0.2% and 0.6%/0.4% based on ViT-S/B/L/H and ResNet-50/101, demonstrating that A2MIM is an architecture-agnostic and scalable framework for MIM pre-training.
Table A4: ImageNet-1K fine-tuning (FT) top-1 accuracy (%) with ResNet (R) and ViT of various model scales. We adopt the 100-epoch fine-tuning protocols for both architectures.
Methods | Supervision | ViT-S | ViT-B | ViT-L | ViT-H | R-50 | R-101
Sup. | Label | 79.9 | 81.8 | 82.6 | 83.1 | 78.1 | 79.8
MoCoV3 | CL | 81.4 | 83.2 | 84.1 | - | 78.7 | -
DINO | CL | 81.5 | 83.6 | - | - | 78.7 | -
MAE | RGB | - | 83.6 | 85.9 | 86.9 | 77.1 | -
SimMIM | RGB | 81.7 | 83.8 | 85.6 | 86.8 | 78.2 | 80.0
MaskFeat | HoG | - | 84.0 | 85.7 | - | 78.4 | -
A2MIM | RGB | 82.2 | 84.2 | 86.1 | 87.0 | 78.8 | 80.4
A2MIM+ | HoG | 82.4 | 84.5 | 86.3 | 87.1 | 78.9 | 80.5
D VISUALIZATION EXPERIMENTAL DETAILS
In addition to visualization results in Sec. 5.4, we visualize more reconstruction results of A2MIM here. Similar to Fig. 4, we ablate the proposed components in A2MIM based on ResNet-50 in Fig. A6, which demonstrates that A2MIM helps ResNet-50 learn more spatial details, i.e., more middle-order interactions. Moreover, we study the effects of the mask token in both ViTs and CNNs in Fig. A7.
[Figure A6 images omitted: raw, masked (zero mask vs. RGB mean mask), and predicted images for fox, cucumber, and balloon examples.]
Figure A6: Visualizations of predicted results from SimMIM (middle) and our A2MIM (right) based on ResNet-50 pre-trained for 100 epochs on ImageNet-1K. Notice that T(s*) denotes adding the mask token T to the optimal stage-s in ResNet-50. We ablate the proposed components by adding them to the baseline SimMIM: replacing the zero mask with the RGB mean mask (the modified SimMIM baseline) and adding the mask token T(s*) relieve grid-like artifacts in the predicted results; adding the proposed Lfreq helps the model to capture more informative details.
[Figure A7 images omitted: raw, masked, and predicted images (goldfish, balloon) for ViT-B and ResNet-50, with and without the learned mask token.]
Figure A7: Visualizations of predicted results with and without the mask token on ImageNet-1K. Notice that mask tokens are adopted in the pre-trained models based on ViT-S (300-epoch) or ResNet-50 (100-epoch). Based on ViT-S, removing the mask token corrupts both the contents of masked patches and the overall colors in SimMIM, while only corrupting the masked contents in A2MIM. Based on ResNet-50, removing the mask token slightly affects spatial details in the masked patches and causes grid-like artifacts in the unmasked patches. The different effects of the mask token in ViT-S and ResNet-50 might be because the two architectures use different spatial-mixing operators and normalization layers. As for ViTs, the self-attention operation captures informative details from unmasked patches, but the non-overlapping patch embedding and layer normalization keep each patch isolated. The mask token learns the mean templates (contents) of masked patches and gathers spatial details from unmasked patches via the self-attention operation. As for CNNs, each patch shares the same contents extracted by batch normalization layers, and the convolution operation extracts features from unmasked and masked patches equally. The mask token learns more high-frequency and informative details. | 1. What is the focus of the paper regarding MIM pre-training approaches?
2. What are the strengths of the proposed method, particularly in understanding middle-order interactions?
3. What are the weaknesses of the paper, especially regarding the connection between sections and scalability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper inspects MIM pre-training approaches and finds MIM essentially helps the model to learn better middle-order interactions between patches. Motivated by that, this paper proposes a novel MIM-based method, dubbed A2MIM, that works well for both (small-sized) ConvNets & ViTs. Extensive experiments are conducted to verify the effectiveness of the proposed approach.
Strengths And Weaknesses
Strengths
This paper provides a thoroughgoing study of MIM, and highlights the essence of MIM pre-training is to help models learn middle-order interactions between patches.
Based on the study of middle-order interactions of MIM, this paper proposes a novel approach that works well for both ConvNets and ViTs. What's more, I appreciate the handling of technical details in this paper; for example, ``Filling Masked Tokens with RGB Mean'' is technically sound.
Extensive experiments on several downstream visual recognition tasks are conducted to verify the effectiveness of the proposed approach.
Weaknesses
The name of the 2nd paragraph of Sec. 2 Related Work is inaccurate. ``Autoregressive Modeling'' means that the output depends linearly on its own previous values (e.g., GPTs), while masked modeling is bidirectional modeling.
I don't see a strong connection between Sec. 3.1 and the proposed approach.
The scalability of A2MIM is unknown, which is a crucial property for pre-training.
Clarity, Quality, Novelty And Reproducibility
The quality, clarity and originality are sound. |
ICLR | Title
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Abstract
Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed via the pre-text task. However, the working principle behind MIM is not well explained, and previous studies insist that MIM primarily works for the Transformer family but is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned and how it is acquired via the MIM task. We observe that MIM essentially teaches the model to learn better middle-order interactions among patches and extract more generalized features. Based on this fact, we propose an Architecture-Agnostic Masked Image Modeling framework (AMIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that our AMIM learns better representations without explicit design and endows the backbone model with the stronger capability to transfer to various downstream tasks for both Transformers and CNNs.
1 INTRODUCTION
Supervised deep learning with large-scale annotated data has witnessed an explosion of success in computer vision (CV) (Krizhevsky et al., 2012a; He et al., 2016) and natural language processing (NLP) (Vaswani et al., 2017). However, a large number of high-quality annotations are not always available in real-world applications. Learning representations without supervision by leveraging pre-text tasks has become increasingly popular.
In CV, early self-supervised learning approaches (Zhang et al., 2016; Doersch et al., 2015; Gidaris et al., 2018) aim to capture invariant features through predicting transformations applied to the same image. However, these methods rely on vision ad-hoc heuristics, and the learned representations are less generic for downstream tasks. Recently, contrastive learning-based approaches (Tian et al., 2020; Chen et al., 2020b; He et al., 2020) have witnessed significant progress, even outperforming supervised methods on several downstream tasks (Chen et al., 2020c; Grill et al., 2020; Zbontar et al., 2021). More recently, inspired by masked autoencoding methods (Radford et al., 2018; Devlin et al., 2018) in NLP, Masked Image Modeling (MIM) methods (Bao et al., 2022; He et al., 2022; Wei et al., 2021; Xie et al., 2021b) have brought about new advances for self-supervised pre-training on CV tasks. The transition from human language understanding to NLP masked autoencoding is quite natural because the filling of missing words in a sentence requires relatively comprehensive semantic understanding. In analogy, humans can understand and imagine masked content by visually filling the missing structures in an image containing occluded parts.
Different from contrastive learning, which yields a clustering effect from pre-training by pulling similar samples and pushing away dissimilar samples, MIM pre-training methods have not been extensively explored in the context of the expected knowledge learned or how this knowledge is acquired. Existing works mainly focus on improving downstream tasks performance via explicit design such as trying different prediction target (Wei et al., 2021), adopting pre-trained tokenizer (Zhou et al., 2021), utilizing complex Transformer decoder (He et al., 2022) or combining with contrastive learning (El-Nouby et al., 2021). Moreover, the success of existing MIM methods is largely confined to Vision Transformer (ViT) structures (Dosovitskiy et al., 2021) since it leads to under-performing performance to directly apply mask token (Devlin et al., 2018) and positional embedding to CNNs.
In this work, we carry out systematic experiments and show that MIM as a pre-training task essentially teaches the model to learn better middle-order interactions between patches for more generalized feature extraction regardless of the underlying network structure. Compared to the local texture features learned by low-order interactions between patches, more complex features such as shape and edge could be extracted via middle-order interactions among patches. The interaction of patches could be considered as information fusion via both the convolution operation of a CNN and the self-attention mechanism of a Transformer. That is to say, CNN and Transformer should both benefit from better middle-order interactions with MIM as the pre-text task.
To bridge the gap of MIM in terms of network architectures based on our extensive experimental analysis, we propose an Architecture-Agnostic Masked Image Modeling framework (A2MIM) that focuses on enhancing the middle-order interaction capabilities of the network. Specifically, we mask the input image with the mean RGB value and place the mask token at intermediate feature maps of the network. In addition, we propose a loss in the Fourier domain to further enhance the middle-order interaction capability of the network. Our contributions are summarized as follows:
• We conducted systematic experiments and showed the essence of MIM is to better learn middle-order interactions between patches but not reconstruction quality.
• We proposed a novel MIM-based framework dubbed A2MIM that bridges the gap between CNNs and Transformers. We are also the first to perform MIM on CNNs without adopting designs native to ViTs that outperforms contrastive learning counterparts.
• Extensive experiments with both Transformers and CNNs on ImageNet-1K and public benchmarks for various downstream tasks show that our method achieves performance improvement on pre-trained representation quality than state-of-the-art methods.
2 RELATED WORK
Contrastive Learning. Contrastive learning learns instance-level discriminative representations by extracting invariant features over distorted views of the same data. MoCo (He et al., 2020) and SimCLR (Chen et al., 2020b) adopted different mechanisms to introduce negative samples for contrast with the positive. BYOL (Grill et al., 2020) and its variants (Chen & He, 2020; Chen et al., 2021) further eliminate the requirement of negative samples to avoid representation collapse. Besides pairwise contrasting, SwAV (Caron et al., 2020) clusters the data while enforcing consistency between multi-augmented views of the same image. Barlow Twins (Zbontar et al., 2021) proposed to measure the cross-correlation matrix of distorted views of the same image to avoid representation collapsing. Meanwhile, some efforts have been made on top of contrastive methods to improve pre-training quality for specific downstream tasks (Xie et al., 2021a; Xiao et al., 2021; Selvaraju et al., 2021; Wu et al., 2022). MoCo.V3 (Chen et al., 2021) and DINO (Caron et al., 2021) adopted ViT (Dosovitskiy et al., 2021) in self-supervised pre-training to replace CNN backbones.
Autoregressive Modeling. Autoencoders (AE) are a typical type of network architecture that allows representation learning with no annotation requirement (Hinton & Zemel, 1993). By forcing denoising property onto the learned representations, denoising autoencoders (Vincent et al., 2008; 2010) are a family of AEs that reconstruct the uncorrected input signal with a corrupted version of the signal as input. Generalizing the notion of denoising autoregressive modeling, masked predictions attracted the attention of both the NLP and CV communities. BERT (Devlin et al., 2018) performs masked language modeling (MLM) where the task is to classify the randomly masked input tokens. Representations learned by BERT as pre-training generalize well to various downstream tasks. For CV, inpainting tasks (Pathak et al., 2016) to predict large missing regions using CNN encoders and colorization tasks (Zhang et al., 2016) to reconstruct the original color of images with removed color channels are proposed to learn representation without supervision. With the introduction of Vision Transformers (ViT) (Dosovitskiy et al., 2021; Liu et al., 2021), iGPT (Chen et al., 2020a) predicts succeeding pixels given a sequence of pixels as input. MAE (He et al., 2022) and BEiT (Bao et al., 2022) randomly mask out input image patches and reconstruct the missing patches with ViTs. Compared to MAE, MaskFeat (Wei et al., 2021) and SimMIM (Xie et al., 2021b) adopt linear layers as the decoder instead of another Transformer as in MAE. MaskFeat applied HOG as the prediction target instead of the RGB value. Other research endeavors (El-Nouby et al., 2021; Zhou et al., 2021; Assran et al., 2022; Akbari et al., 2021; Sameni et al., 2022) combine the idea of contrastive learning
(CL) with MIM. SplitMask (El-Nouby et al., 2021) proposed to use half of the image pixels to predict the other half while applying InfoNCE loss (Van den Oord et al., 2018) across the corresponding latent features. MSN (Assran et al., 2022) matches the representation of an image view containing randomly masked patches and the original unmasked image. Similarly, iBOT (Zhou et al., 2021) adopts the Siamese framework to combine self-distillation with MIM. Moreover, Data2Vec (Baevski et al., 2022) proposed a framework that applies the masked prediction idea for either speech, NLP, or CV. However, most MIM works are confined to ViT architectures, recently proposed CIM (Fang et al., 2022) adopts the output of a pre-trained tokenizer as target and takes the prediction of a frozen BEiT as input to the encoder as a workaround to enable MIM on CNNs. In this work, we propose A2MIM with no components native to ViTs adopted to perform MIM with ViTs and CNNs.
3 INTRIGUING PROPERTIES OF MASKED IMAGE MODELING
3.1 IS MIM BETTER IMAGE AUGMENTATION?
Compared to CNN, Transformers gain tremendous performance improvement with carefully designed image augmentation techniques such as RandAug(Cubuk et al., 2020), CutMix(Yun et al., 2019) and random erasing(Zhong et al., 2020). Random erasing(Zhong et al., 2020) randomly removes part of the image and replaces it with Gaussian noise, while Cutmix randomly removes part of the image and replaces the corresponding region with a patch from another image. Similarly, as in most MIM pre-training tasks, some image patches are masked out and replaced with a learnable mask token. Noticing the resemblance of the masking operations, we hypothesize that MIM as a pre-training task and masking-based data augmentations enhance the network’s robustness towards occlusion, enabling the network with a more generalized feature extraction ability. To verify our hypothesis, we design an occlusion robustness test. Let x ∈ R3×H×W be an input image and y ∈ RC be its corresponding label, where C is the class number. Consider a classification task y = f(x) where f denotes a neural network, the network is considered robust if the network outputs the correct label given an occluded version of the image x′, namely y = f(x′). For occlusion, we consider the patch-based random masking as adopted in most MIM works (He et al., 2022; Xie et al., 2021b; Wei et al., 2021). In particular, we split the image of size 224× 224 into patch size 16× 16 and randomly maskM patches out of the total number ofN patches. The occlusion ratio could then be defined as MN . We conduct experiments on ImageNet-100 (IN-100) (Krizhevsky et al., 2012b) for both Transformer and CNN with different settings. We choose ViT-S (Dosovitskiy et al., 2021) and ResNet-50(He et al., 2016) as the network architecture. Robustness is compared under the following settings: (i) random weight initialization with no image augmentation applied; (ii) random weight initialization with different image augmentations applied; (iii) MIM pre-training as weight initialization with and without image augmentations applied. In Fig. 1, we report the average top-1 accuracy across five runs trained with different settings under various occlusion ratios. Fig. 1(a) and 1(b) show that both MIM and patch-removing alike augmentations significantly improve model occlusion robustness for both ViT-S and ResNet-50. Nevertheless, MIM yields more robust feature extraction than adopting augmentations. Although MIM and patch-removing alike augmentations share similar masking mechanisms, MIM explicitly forces the model to learn the interactions between patches in order to reconstruct missing patches enabling more robust feature extraction. Comparing Fig. 1(a) and 1(b), the convex trend of accuracy from ViT-S indicates better robustness than the concave trend from ResNet-50. The self-attention mechanism of ViTs is able to model the interactions between patches with high degrees of freedom compared to CNNs constrained by convolution priors. We claim that
the success of MIM on ViTs can be seen as resonance in terms of better patch interactions imposed by MIM while supported by the self-attention mechanism of ViTs.
3.2 MIDDLE-ORDER INTERACTIONS FOR GENERALIZED FEATURE EXTRACTION
Next, we show that MIM essentially enables better middle-order interactions between patches. Note that existing MIM works adopt a medium or high masking ratio (Xie et al., 2021b; He et al., 2022) (e.g., 60% or 70%, see Fig. 2) during pre-training, and in these settings, the pairwise interactions between patches are under a middle-size context measured by the order m. Early inpainting work based on CNN (Pathak et al., 2016) resembles MIM but attracts little attention due to the much inferior performance to contrastive learning methods. The inpainting task adopts the masking strategy as illustrated in Fig. 1(c), which masks a full large region instead of random small patches. Such masking mechanisms ignore patch interaction and focus only on reconstruction leading to poor, learned representation quality. To investigate whether MIM makes the model more sensitive to patch interactions of some particular orders, we resort to the tool of multi-order interactions introduced by (Deng et al., 2022; Zhang et al., 2020). Intuitively, mth-order interactions of patches refer to inference patterns (deep features) induced from m number of patches of the original image in the input space. With a small value of m (low-order interactions), the model simply learns local features such as texture. Formally, the multi-order interaction I(m)(i, j) is to measure the order of interactions between patches i and j. We define I(m)(i, j) to be the average interaction utility between patches i and j on all contexts consisting of m patches. m indicates the order of contextual complexity of the interaction. Mathematically, given an input image x with a set of n patches N = {1, . . . , n} (e.g., an image with n pixels), the multi-order interaction I(m)(i, j) is defined as:
I(m)(i, j) = ES⊆N\{i,j},|S|=m[∆f(i, j, S)], (1)
where ∆f(i, j, S) = f(S ∪ {i, j}) − f(S ∪ {i}) − f(S ∪ {j}) + f(S). f(S) indicates the score of output with patches in N \ S kept unchanged but replaced with the baseline value (Ancona et al., 2019), where the context S ⊆ N . See Appendix B.2 for details. To measure the interaction complexity of the neural network, we measure the relative interaction strength J (m) of the encoded m-th order interaction as follow:
J (m) = Ex∈ΩEi,j |I(m)(i, j|x)|
Em′Ex∈ΩEi,j |I(m ′ )(i, j|x)|
, (2)
where Ω is the set of all samples and 0 ≤ m ≥ n − 2. J (m) is the average value over all possible pairs of patches of input samples. J (m) is normalized by the average value of all interaction strengths. J (m) then indicates the distribution (area under curve sums up to one) of the order of interactions of the network. In this work, we use J (m) as the metric to evaluate and analyze interaction orders of the network with MIM pre-training. We conduct experiments on IN-100 with image size 224× 224 and use ViT-S (Dosovitskiy et al., 2021) and ResNet-50 (He et al., 2016) as the network architecture. We consider a patch of size 16× 16 as an input patch. For the computation of J (m), we adopt the sampling solution following previous works (Deng et al., 2022; Zhang et al., 2020). As can be seen from Fig. 1(c) that ViT-S with random weight initialization tends to learn simple interactions with few patches (e.g., less than 0.05n patches) while MIM pre-trained models show a stronger interaction for relative middle-order (from 0.05n to 0.5n). Similarly, as observed from 1(d), MIM pre-trained
ResNet-50 enhances the middle-order interactions from 0.1n to 0.55n compared to random initialized models. Stronger middle-order interactions form more complex features such as shape and edge compared to local texture features learned from low-order interactions (Naseer et al., 2021).
4 APPROACH
We propose a generic MIM framework following two design rules: (a) No complex or non-generic designs are adopted to ensure compatibility with all network architectures. (b) Better middleorder interactions between patches for more generalized feature extraction. Figure 3 highlights the difference between our proposed framework and existing MIM frameworks in terms of three key components: masking strategy, encoder/decoder architecture design and prediction targets.
4.1 ARCHITECTURE AGNOSTIC FRAMEWORK
Mask Where Middle-order Interactions Occur. Existing works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b; Wei et al., 2021) adopt the masking strategy where the input image is divided into non-overlapping patches, and a random subset of patches are masked. MAE utilizes a Transformer as a decoder and takes only the visible patches into the encoder. Masked tokens are appended to the decoder to reconstruct the masked patches. SimMIM (Xie et al., 2021b) and MaskFeat (Wei et al., 2021) utilize a fully connected layer as the decoder and feed the mask token into the encoder together with the visible patches. The mask token (Devlin et al., 2018) is a token-shared learnable parameter that indicates the presence of missing patches to be predicted. Despite different choices of decoder structures, the mask token is either placed at the input to the encoder or the decoder. Mathematically, the masking process of MIM is defined as xmask = x (1−M) + T M , where M is the random occlusion mask, and T represents the learnable mask token. Such masking at the patch embedding layer aligns with the attention mechanism of Transformers, which is robust against occlusion. However, masking at the stem layer undermines the context extraction capability of CNN, which relies on local inductive biases. Moreover, masking at input stages of the network leads to loworder interactions. Thus, we propose to mask intermediate features where the output feature contains both the semantic and spatial information and the mask token can encode interactions with the medium number of tokens. More concretely, our masking operation is defined as zlmask = z
l + T D(M), where zl is the intermediate feature map of x at layer-l in the Transformer encoder (or for stage-l in CNNs) and D(·) is the corresponding down-sampling function of the occlusion mask.
Filling Masked Tokens with RGB Mean. It is worth noting that existing works directly replace the occluded patches with the mask token in the input space or after the patch embedding (Bao et al., 2022; Xie et al., 2021b). In contrast, we use the average RGB value to fill the occluded patches as the input to the encoder and add the mask token onto the intermediate feature maps of the encoder. The masking mechanism originates from NLP where languages are of high-level semantics and do not require low-level feature extraction as image processing. The introduction of a zero mask at the early stages of the network where low-level feature extraction happens is harmful in terms of feature extraction. From the view of Fourier domain, the RGB mean is the DC component of images. It not only brings about minimum local statistics variation caused by the masking operation but also forces the network to model rather more informative medium frequencies instead of filling the mask patches with blurry color blocks of low frequencies. The proposed masking strategy is generic to both convolution and self-attention in that it accommodates low-level to semantic-level feature extraction.
4.2 MIDDLE-ORDER INTERACTIONS FROM FOURIER PERSPECTIVE
Current works (El-Nouby et al., 2021; He et al., 2022; Xie et al., 2021b) adopt raw RGB values as the prediction target. However, raw pixels in the spatial domain are heavily redundant and often contain low-order statistics (Bao et al., 2022; Wei et al., 2021; Zhou et al., 2021). MaskFeat (Wei et al., 2021) adopts the Histogram of Oriented Gradients (HOG) as the prediction target, outperforming MAE and SimMIM. HOG is a discrete descriptor of medium- or high-frequency features that captures shape patterns based on middle-order interactions. ViTs and CNNs have low-pass and high-pass filtering properties, respectively (Park & Kim, 2022; 2021). ViTs and CNNs each have certain frequency bands that they cannot model well, and neither can model middle-order interactions well (detailed in Appendix B.3). The observation that the medium-frequency descriptor HOG improves middle-order interactions leads to the hypothesis that learning medium frequencies would help the model learn more middle-order interactions. Given an RGB image x ∈ R^{3×H×W}, the discrete Fourier transform (DFT) of each channel is defined as:
F(u, v) = ∑_{h=1}^{H} ∑_{w=1}^{W} x(h, w) e^{−2πj(uh/H + vw/W)}.   (3)
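For reference, Eq. (3) applied channel-wise corresponds to a standard 2D FFT; the helper below is a sketch using torch.fft (the function name is ours).

```python
import torch

def per_channel_dft(x: torch.Tensor) -> torch.Tensor:
    """2D discrete Fourier transform of each channel of an image batch.

    x: (B, 3, H, W) real-valued images; returns a complex (B, 3, H, W)
    tensor, matching Eq. (3) applied channel-wise.
    """
    return torch.fft.fft2(x, dim=(-2, -1), norm=None)
```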
In addition to the common MIM loss in the spatial domain, L_spa, we propose L_freq in the Fourier domain:
L_freq = ∑_{c=1}^{3} ∑_{u=1}^{H} ∑_{v=1}^{W} ω(u, v) ‖DFT(x^pred_c ⊙ M + de(x^pred_c) ⊙ (1 − M)) − DFT(x_c)‖,   (4)
where x^pred is the predicted image, de(·) is the detach-gradient operation, and ω(u, v) is the frequency weighting matrix. ω(u, v) enables both ViTs and CNNs to model features of medium frequencies rather than the local textures and noise corresponding to high frequencies. Inspired by the Focal Frequency loss (Jiang et al., 2021), we define an adaptive ω(u, v) as follows:
ω(u, v) = ‖DFT(x^pred_c ⊙ M + de(x^pred_c) ⊙ (1 − M)) − DFT(x_c)‖^α,   (5)
where α is a scaling factor, and we set α = 1. Fig. A5 verifies that Eq. (5) allows the model to learn previously ignored frequencies (mostly the medium-frequency components). Note that L_freq introduces negligible overhead, since the DFT is computed with Fast Fourier Transform (FFT) algorithms of O(n log n) complexity. The overall loss function of A2MIM is then defined as:
L = L_spa + λ L_freq,   (6)
where L_spa = ‖(x^pred − x) ⊙ M‖ and λ is a loss weighting parameter. We set λ to 0.5 by default.
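The sketch below combines Eqs. (4)-(6) into a single loss function. Detaching the adaptive weight follows the Focal Frequency loss convention; the normalization of each term (sum vs. mean) and the function name are our assumptions, not the authors' exact implementation.

```python
import torch

def a2mim_loss(x_pred, x, mask, lam=0.5, alpha=1.0):
    """Sketch of Eqs. (4)-(6): masked spatial L1 loss plus an adaptively
    re-weighted frequency loss. x_pred, x: (B, 3, H, W); mask: (B, 1, H, W)
    with 1 at masked pixels."""
    # Spatial loss on masked pixels only (Eq. 6).
    l_spa = ((x_pred - x).abs() * mask).sum() / mask.sum().clamp(min=1)

    # Keep gradients on masked pixels, detach the visible ones: de(.) in Eq. (4).
    mixed = x_pred * mask + x_pred.detach() * (1 - mask)
    diff = torch.fft.fft2(mixed) - torch.fft.fft2(x)
    err = diff.abs()                 # |DFT difference| per frequency (u, v)
    w = err.detach() ** alpha        # adaptive weight omega(u, v), Eq. (5)
    l_freq = (w * err).mean()        # Eq. (4)

    return l_spa + lam * l_freq
```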
5 EXPERIMENTS
5.1 PRE-TRAINING SETUP
We adopt ResNet-50 (He et al., 2016) and Vision Transformer (Dosovitskiy et al., 2021) (ViT-S/16 and ViT-B/16) as the backbones. We pre-train on the ImageNet-1K (IN-1K) training set with the AdamW (Loshchilov & Hutter, 2019) optimizer with a basic learning rate of 1.5 × 10^-4 adjusted by
a cosine learning rate scheduler and a batch size of 2048. The input image size is 224 × 224 with a patch size of 32 × 32. We use a random masking ratio of 60%. By default, the learnable mask tokens are placed at stage-3 in ResNet-50 and layer-5/layer-8 in ViT-S/ViT-B, respectively. We adopt a linear prediction head as the decoder (Xie et al., 2021b). A2MIM+ indicates adopting HOG as supervision and using the MLP decoder with depth-wise (DW) convolution. Our experiments are implemented on OpenMixup (Li et al., 2022) in PyTorch and conducted on workstations with NVIDIA V100 GPUs. We report the average results of 3 trials for all experiments and use bold and underline to indicate the best and second-best performance. See Appendix A for detailed pre-training settings.
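The random masking described above (60% of 32 × 32 patches on a 224 × 224 input) can be generated as in the following sketch; the function name and the pixel-level mask format are illustrative assumptions.

```python
import torch

def random_patch_mask(batch: int, img: int = 224, patch: int = 32,
                      ratio: float = 0.6) -> torch.Tensor:
    """Sample a binary occlusion mask over non-overlapping patches.

    Returns a (batch, 1, img, img) tensor with 1 on masked pixels;
    60% of the 7 x 7 = 49 patches are masked by default.
    """
    grid = img // patch                      # 224 // 32 = 7
    num = grid * grid
    num_masked = int(num * ratio)
    mask = torch.zeros(batch, num)
    for b in range(batch):
        idx = torch.randperm(num)[:num_masked]
        mask[b, idx] = 1.0
    mask = mask.view(batch, 1, grid, grid)
    # Upsample the patch-level mask to pixel resolution.
    return mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
```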
5.2 IMAGE CLASSIFICATION ON IMAGENET-1K
Evaluation Protocols. We first evaluate the learned representations by end-to-end fine-tuning (FT) and linear probing (Lin.) protocols on IN-1K. For evaluation on CNNs, we adopt the RSB A2/A3 (Wightman et al., 2021) training settings for fine-tuning ResNet-50, which employ the LAMB (You et al., 2020) optimizer with a cosine scheduler for 300/100 epochs. For the linear probing setting on ResNet-50, we freeze the backbone features and train a linear classifier with an initial learning rate of 30 and a batch size of 256, following MoCo (He et al., 2020). For evaluation on Transformers, we employ the same fine-tuning protocol as MAE (He et al., 2022), which uses the DeiT (Touvron et al., 2021) augmentation setting and an AdamW optimizer for 100-epoch training, and adopt a layer-wise learning rate decay of 0.65 following (Bao et al., 2022). See Appendix A for detailed evaluation configurations.
ResNet-50. We compare the proposed A2MIM with classical self-supervised learning methods (Inpainting (Pathak et al., 2016), Relative-Loc (Doersch et al., 2015), and Rotation (Gidaris et al., 2018)), contrastive learning (CL), and MIM methods with 100/300 pre-training epochs. We modified MIM methods to run them on ResNet-50: the learnable mask token is employed to the encoder of BEiT (Bao et al., 2022), Data2Vec (Baevski et al., 2022), and SimMIM (Xie et al., 2021b) after the stem (the output feature map of 56 × 56 resolution); the encoder of MAE randomly selects 25% of the 56 × 56 output features of the stem as unmasked patches and takes the reorganized 28 × 28 patches as the input of the four stages. As shown in Tab. 1, our approach achieves competitive performance with state-of-the-art contrastive-based methods under 100-epoch RSB A3 fine-tuning. Note that MIM methods see fewer training samples per epoch than CL methods (40% vs. 200% of patches) and usually require longer pre-training. Based on a longer fine-tuning evaluation using RSB A2, our method (300-epoch) outperforms contrastive-based methods with even fewer training epochs. Meanwhile, A2MIM also improves the baseline SimMIM† (+0.8%) and the concurrent work CIM (+0.4%) in terms of RSB A3 fine-tuning for the longer pre-training. Besides, we also report the linear probing accuracy of the fast pre-training for reference, although our main focus is to learn representations with better fine-tuning performance. Although the linear probing performance of our method is lower than that of contrastive-based methods, it still improves the baseline by 0.6%.

Table 3: Performance of object detection and semantic segmentation tasks based on ResNet-50 on COCO and ADE-20K.

Method          Epochs  COCO AP^box  COCO AP^mask  ADE-20K mIoU
PyTorch (Sup.)  120     38.2         33.3          36.1
SimCLR          800     37.9         33.3          37.6
MoCoV2          400     39.2         34.3          37.5
BYOL            400     38.9         34.2          37.2
SwAV            800     38.4         33.8          37.3
SimSiam         400     39.2         34.4          37.2
Barlow Twins    800     39.2         34.3          37.3
SimMIM‡         300     39.1         34.2          37.4
CIM             300     -            -             38.0
A2MIM           300     39.8         34.9          38.3

Table 4: Performance of object detection and semantic segmentation tasks based on ViT-B on COCO and ADE-20K.

Method       Supervision  Epochs  COCO AP^box  COCO AP^mask  ADE-20K mIoU
DeiT (Sup.)  Label        300     47.9         42.9          47.0
MoCoV3       CL           300     47.9         42.7          47.3
DINO         CL           400     46.8         41.5          47.2
BEiT         DALLE        300     43.1         38.2          47.1
iBOT         Momentum     400     48.4         42.7          48.0
MAE          RGB          1600    48.5         42.8          48.1
MaskFeat     HoG          800     49.2         43.2          48.8
SimMIM       RGB          800     48.9         43.0          48.4
CAE          DALLE        800     49.2         43.3          48.8
A2MIM        RGB          800     49.4         43.5          49.0
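The following sketch illustrates how the MAE-style modification described above can subsample 25% of the 56 × 56 stem features and reorganize them into a dense 28 × 28 map; it is our reading of that description, not the authors' code, and the function name is hypothetical.

```python
import torch

def mae_style_stem_subsample(feat: torch.Tensor, keep_ratio: float = 0.25):
    """Keep 25% of the 56 x 56 stem features of ResNet-50 and reorganize
    them into a dense 28 x 28 map fed to the four stages.

    feat: (B, C, 56, 56) stem output. Returns the (B, C, 28, 28) map plus
    the kept indices (needed later to scatter decoder outputs back).
    """
    B, C, H, W = feat.shape
    num = H * W                                  # 3136 positions
    num_keep = int(num * keep_ratio)             # 784 = 28 * 28
    side = int(num_keep ** 0.5)                  # 28
    tokens = feat.flatten(2).transpose(1, 2)     # (B, N, C)
    idx = torch.rand(B, num, device=feat.device).argsort(dim=1)[:, :num_keep]
    kept = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, C))
    return kept.transpose(1, 2).reshape(B, C, side, side), idx
```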
ViT. We then evaluate A2MIM based on ViT-S/B in Tab. 2. We list the supervision target used by various pre-training methods in the second column of Tab. 2. DALL-E (Ramesh et al., 2021) and VQGAN (Esser et al., 2021) are pre-trained image tokenizers, while momentum refers to the momentum encoder. Our approach outperforms current state-of-the-art methods with complex supervision, e.g., SplitMask (MIM with CL combined), iBOT (complex teacher-student architecture), and CIM (pre-trained BEiT as supervision). Based on ViT-S/B, A2MIM improves the baseline SimMIM by 0.5%/0.4% with RGB as supervision and 0.7%/0.7% with the HOG feature as supervision.
5.3 TRANSFER LEARNING EXPERIMENTS
Object detection and segmentation on COCO. To verify the transfer abilities, we benchmark CL and MIM methods on object detection and segmentation with COCO (Lin et al., 2014). For evaluation on CNNs, we follow the setup in MoCo, which fine-tunes Mask R-CNN (He et al., 2017) with a ResNet-50-C4 backbone using the 2× schedule on COCO train2017 and evaluates on COCO val2017. Results in Tab. 3 indicate that our approach (300-epoch) outperforms contrastive-based methods with longer pre-training (+0.7% AP^box and +0.6% AP^mask). For evaluation on Transformers, we follow MAE and CAE, which efficiently fine-tune Mask R-CNN with a ViT-B backbone using the 1× schedule. In Tab. 4, our approach (800-epoch) is superior to popular contrastive-based and MIM methods, e.g., outperforming MAE (1600-epoch) by 0.9% AP^box and 0.8% AP^mask.
Semantic segmentation on ADE20K. We then evaluate the transfer performance on semantic segmentation with ADE20K (Zhou et al., 2019) by fine-tuning UperNet (Xiao et al., 2018). Based on ResNet-50, all CNN models are fine-tuned for 160K iterations with SGD following MoCo. Results in Tab. 3 show that our method outperforms CL methods by at least 0.9% mIoU and outperforms CIM (which requires an extra pre-trained BEiT (Bao et al., 2022)) by 0.3% mIoU. Based on ViT-B, we fine-tune models for 80K iterations with AdamW following MAE. Tab. 4 shows that our approach consistently improves over MIM methods (e.g., outperforming MAE and SimMIM by 0.9% and 0.6% mIoU).
5.4 ABLATION STUDY
We next verify the effectiveness of the proposed components. Ablation studies are conducted with ResNet-50 and ViTs on IN-100 and IN-1K using the fine-tuning protocol. Based on the modified baseline SimMIM (L_spa), we first compare different mask token mechanisms: Replacing denotes the original way used in most MIM methods, and Addition denotes our proposed way that adds the mask token to intermediate feature maps of the backbone. As shown in Fig. 5, adding the mask token to the medium stages (stage-3) or layers (layer-5) yields the best performance. Replacing masked patches in input images with the RGB mean value slightly improves the baseline SimMIM, especially for ResNet-50 (88.19 vs. 87.75 on IN-100). Then, we verify the proposed L_freq in Tab. 5. We find that simply using L_freq without the adaptive re-weighting ω (Eqn. 5) brings limited improvements as a frequency constraint added to L_spa, while employing ω further enhances performance by helping the model learn more informative frequency components. Additionally, we visualize reconstruction results in Fig. 4 to show the improvements brought by our proposed components (more results in Appendix B).
5.5 VERIFICATION OF A2MIM DESIGN RULES
Table 6: Analysis of the scalability of A2MIM with advanced components on IN-1K.
Module    Setting                       ResNet-50  ViT-B
Decoder   Linear                        78.8       82.4
Decoder   2-layer MLP                   78.8       82.4
Decoder   2-layer MLP (w/ DW)           78.9       82.5
Decoder   2-layer Transformer           78.6       82.3
Decoder   2-layer Transformer (w/ DW)   78.8       82.6
Target    RGB                           78.8       82.4
Target    HoG Feature                   78.9       82.6
Target    DINO Feature                  78.9       82.7

We verify whether A2MIM meets the intended design rules using the same experiment settings as Sec. 5.4: (i) A2MIM is generic enough to incorporate advanced components proposed in previous works (e.g., complex decoders, advanced prediction targets). As for the decoder structure, we replace the original linear decoder with 2-layer MLP or Transformer decoders, but find limited improvements or degenerated performance (similar to SimMIM) in Tab. 6. Inspired by PVT.V2 (Wang et al., 2022), we introduce a depth-wise (DW) convolution layer (w/ DW) to the MLP decoder (adding a 5 × 5 DW layer in between) and the Transformer decoder (adding a 3 × 3 DW layer in each FFN (Wang et al., 2022)), which brings improvements compared to the linear decoder. As for the prediction target, we follow MaskFeat to change the RGB target to the HoG feature or the output feature of ViT-B/16 pre-trained for 1600 epochs by DINO (Caron et al., 2021). Tab. 6 shows that using advanced targets significantly improves the performance of A2MIM for both ResNet-50 and ViT-B. Therefore, we can conclude that A2MIM is a generally applicable framework. (ii) A2MIM enhances occlusion robustness and middle-order interactions among patches, as shown by experiments on ImageNet-1K in Fig. A3.
6 CONCLUSION
In this paper, we delved deep into MIM and answered the question of what exactly is learned during MIM pre-training. We adopted multi-order interactions to study the interaction order among image patches. We discovered that MIM essentially teaches the network to learn middle-order interactions among image patches for more complex feature extraction, regardless of the network architecture. Based on our findings, we further proposed a general framework, A2MIM, that is compatible with both Transformers and CNNs for MIM tasks, aiming at enhancing patch interactions during self-supervised pre-training. Besides a different mask token mechanism, we proposed a loss in the Fourier domain to better learn middle-order interactions. Experimental results have shown that our proposed framework improves the representations learned for both CNNs and Transformers, yielding superior performance to state-of-the-art methods on various downstream tasks.
A DETAILS OF COMPARISON EXPERIMENTS
This section provides experimental details for Sec. 5, e.g., pre-training and evaluation on ImageNet-1K and transfer learning settings on downstream tasks.
A.1 IMAGENET-1K EXPERIMENTS
Pre-training. The default settings of A2MIM for ResNet-50 and ViTs are provided in Tab. A1, following SimMIM (Xie et al., 2021b). We use the AdamW (Loshchilov & Hutter, 2019) optimizer with the cosine scheduler and the linear learning rate scaling rule (Goyal et al., 2020): lr = base_lr × batchsize / 256. Similar to current MIM methods, we only use RandomResizedCrop with a scale of (0.67, 1.0) and do not employ other complex augmentations (e.g., Rand Augment (Cubuk et al., 2020), mixups (Yun et al., 2019), or stochastic depth) during pre-training. As for ViTs, we adopt cosine decay for 100- and 300-epoch pre-training, while using step decay (the learning rate multiplied by 0.1 at epoch 700) for 800-epoch pre-training.
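The linear scaling rule and optimizer setup can be expressed as in this short sketch (hyperparameter values taken from Tab. A1; the helper name is ours).

```python
import torch

def build_pretrain_optimizer(model, base_lr=1.5e-4, batch_size=2048,
                             weight_decay=0.05):
    """Linear lr scaling rule: lr = base_lr * batch_size / 256."""
    lr = base_lr * batch_size / 256
    return torch.optim.AdamW(model.parameters(), lr=lr,
                             betas=(0.9, 0.999), weight_decay=weight_decay)
```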
End-to-end fine-tuning. Our fine-tuning settings follow common practices of supervised image classification on ImageNet-1K. As shown in Tab. A2, we fine-tune pre-trained ViTs for 100 epochs using the DeiT (Touvron et al., 2021) training recipe, which employs the AdamW (Loshchilov & Hutter, 2019) optimizer with the cross-entropy (CE) loss; we fine-tune pre-trained ResNet-50 for 100/300 epochs using the RSB A3/A2 (Wightman et al., 2021) settings, which employ the LAMB (You et al., 2020) optimizer with the binary cross-entropy (BCE) loss. Additionally, we use layer-wise learning rate decay as in (Bao et al., 2022) for fine-tuning ViT models.
Table A1: ImageNet-1K A2MIM pre-training settings for ResNet-50 and ViT models.

Configuration            ResNet-50              ViTs
Pre-training resolution  224 × 224              224 × 224
Mask patch size          32 × 32                32 × 32
Mask ratio               60%                    60%
Optimizer                AdamW                  AdamW
Base learning rate       1.5 × 10^-4            1 × 10^-4
Weight decay             0.05                   0.05
Optimizer momentum       β1, β2 = 0.9, 0.999    β1, β2 = 0.9, 0.999
Batch size               2048                   2048
Learning rate schedule   Cosine                 Cosine / Step
Warmup epochs            10                     10
RandomResizedCrop        ✓                      ✓
Rand Augment             ✗                      ✗
Stochastic Depth         ✗                      ✗
Gradient Clipping        ✗                      max norm = 5
Table A2: ImageNet-1K fine-tuning recipes for ResNet-50 (RSB A2/A3) and ViTs (DeiT).

Configuration           ViTs (DeiT)   ResNet-50 (RSB A2)   ResNet-50 (RSB A3)
FT epochs               100           300                  100
Training resolution     224           224                  160
Testing resolution      224           224                  224
Testing crop ratio      0.875         0.95                 0.95
Optimizer               AdamW         LAMB                 LAMB
Base learning rate      2.5 × 10^-4   1.5 × 10^-3          1 × 10^-3
Weight decay            0.05          0.02                 0.02
Batch size              1024          2048                 2048
Learning rate schedule  Cosine        Cosine               Cosine
Warmup epochs           5             5                    5
Label smoothing         0.1           ✗                    ✗
Stochastic depth        0.1           0.05                 ✗
Gradient clipping       5.0           ✗                    ✗
Rand Augment            (9, 0.5)      (7, 0.5)             (6, 0.5)
Mixup alpha             0.8           0.1                  0.1
CutMix alpha            1.0           1.0                  1.0
Loss function           CE loss       BCE loss             BCE loss
A.2 OBJECT DETECTION AND SEGMENTATION ON COCO
We adopt the Mask R-CNN (He et al., 2017) framework to perform transfer learning to object detection and segmentation on COCO (Lin et al., 2014) in Detectron2^1. For evaluation on ResNet-50, we follow MoCo (He et al., 2020) and fine-tune Mask R-CNN with the pre-trained ResNet-50-C4 backbone using the 2× schedule (24 epochs). For evaluation on ViTs, we follow MAE (He et al., 2022), which employs the pre-trained ViT backbone and an FPN neck (Lin et al., 2017) in Mask R-CNN, and fine-tune the model using the 1× schedule (12 epochs). For a fair comparison, we follow (Bao et al., 2022; Xie et al., 2021b) to turn on the relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning, initialized as zero.
A.3 SEMANTIC SEGMENTATION ON ADE-20K
We adopt UperNet (Xiao et al., 2018) to perform transfer learning to semantic segmentation on ADE-20K and use the semantic segmentation implementation in MMSegmentation^2.
^1 https://github.com/facebookresearch/detectron2
^2 https://github.com/open-mmlab/mmsegmentation
[Figure A1: four panels — (a) ViT-S and (b) ResNet-50 top-1 accuracy (%) vs. random PatchDrop occlusion ratio (%); (c) ViT-S and (d) ResNet-50 interaction strength J^(m) vs. order m/n — comparing BYOL, MoCoV3, MAE, and SimMIM with DeiT/vanilla fine-tuning.]
Figure A1: (a)(b): Robustness against different occlusion ratios of images (CL vs. MIM) is studied for both ViT-S and ResNet-50 on ImageNet-100. (c)(d): Distributions of the interaction strength J^(m) (CL vs. MIM) are explored for both ViT-S and ResNet-50 on ImageNet-100. The label indicates the pre-training method + fine-tuning augmentation used; random stands for random weight initialization.
We initialize the UperNet using the pre-trained backbones (ResNet-50 or ViTs) on ImageNet-1K. For ViTs, we fine-tune end-to-end for 80K iterations by AdamW with a batch size of 16. We search for an optimal layer-wise decay from {0.8, 0.9} and an optimal learning rate from {1 × 10^-4, 2 × 10^-4, 3 × 10^-4} for all competitors. Similar to the fine-tuning settings on COCO, we use the relative position bias in ViT (Dosovitskiy et al., 2021) during both pre-training and transfer learning, as in (Bao et al., 2022; Xie et al., 2021b). For ResNet-50, we follow MoCo (He et al., 2020), i.e., all CNN models are fine-tuned for 160K iterations by SGD with a momentum of 0.9 and a batch size of 16.
B EMPIRICAL EXPERIMENTS
This section provides background information and experimental details for Sec. 3. We also provide additional results of occlusion robustness evaluation and multi-order interaction strength.
B.1 OCCLUSION ROBUSTNESS
In Sec. 3.1, we analyze the robustness against occlusion of fine-tuned models on ImageNet-100 (a subset of ImageNet-1K divided by (Tian et al., 2020)) using the official implementation^3 provided by Naseer et al. (2021). Both MIM and contrastive-based methods are pre-trained for 400 epochs on ImageNet-100 using their pre-training settings for ImageNet-1K. We adopt the DeiT fine-tuning recipe in Tab. A2 and use the same setting (100 epochs) for both ViT-S and ResNet-50. Note that we use the modified SimMIM for ResNet-50 (replacing masked patches in the input image with the RGB mean) in all experiments.
As shown in Fig. 1 and Fig. A1, we compared MIM pre-trained models with supervised methods (using various augmentations) and contrastive learning pre-trained methods in terms of top-1 accuracy under various occlusion ratios. We find that MIM methods show better occlusion robustness on both Transformers and CNNs. In addition to Sec. 3.1, we also provide results of salient occlusion for ViT-S and ResNet-50 on ImageNet-100 in Fig. A2. Note that the occlusion ratio means the ratio of dropped patches to total patches, and we plot the mean accuracy across 3 runs. We can conclude that MIM pre-trained models have stronger robustness against random and salient occlusions than supervised and contrastive-based methods.
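A simplified sketch of the random PatchDrop evaluation is shown below; it follows our reading of Naseer et al. (2021) rather than their official implementation, and the 16-pixel patch size and function name are illustrative assumptions.

```python
import torch

@torch.no_grad()
def accuracy_under_patchdrop(model, images, labels, drop_ratio,
                             patch: int = 16):
    """Random PatchDrop: zero out a fraction of patches, then measure top-1.

    images: (B, 3, H, W); drop_ratio in [0, 1] is the fraction of
    dropped patches.
    """
    B, _, H, W = images.shape
    gh, gw = H // patch, W // patch
    num = gh * gw
    order = torch.rand(B, num, device=images.device).argsort(1)
    mask = torch.ones(B, num, device=images.device)
    mask.scatter_(1, order[:, : int(num * drop_ratio)], 0.0)  # drop patches
    mask = mask.view(B, 1, gh, gw)
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    logits = model(images * mask)
    return (logits.argmax(1) == labels).float().mean().item()
```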
B.2 MULTI-ORDER INTERACTION
In Sec. 3.2, we interpret what is learned by MIM through multi-order interactions (Deng et al., 2022; Zhang et al., 2020). The interaction complexity can be represented by I^(m)(i, j) (defined in Eqn. 1), which measures the average interaction utility between variables i, j over all contexts consisting of m variables. Notice that the order m reflects the contextual complexity of the interaction I^(m)(i, j). For example, a low-order interaction (e.g., m = 0.05n) means a relatively simple collaboration between variables i, j, while a high-order interaction (e.g., m = 0.95n) corresponds to a complex collaboration. As figured out in the representation bottleneck (Deng et al., 2022), deep neural networks (DNNs) are more likely to encode both low-order and high-order interactions, but often fail to learn middle-order interactions. We hypothesize that MIM helps models learn more middle-order interactions, since MIM has a natural advantage in cases where some parts of the image are masked out.
^3 https://github.com/Muzammal-Naseer/Intriguing-Properties-of-Vision-Transformers
[Figure A2: four panels of top-1 accuracy (%) vs. occlusion ratio (%) — (a) ViT-S random PatchDrop, (b) ViT-S salient PatchDrop, (c) ResNet-50 random PatchDrop, (d) ResNet-50 salient PatchDrop.]
Figure A2: Robustness against various random or salient occlusion ratios of images is studied in (a)(b) for ViT-S and in (c)(d) for ResNet-50 using various experimental settings on ImageNet-100. The label indicates the pre-training method + fine-tuning setting used; random stands for random weight initialization.
[Figure A3: panels (a) ViT-S and (b) ResNet-50 top-1 accuracy (%) vs. random PatchDrop occlusion ratio (%), and panels (c) ViT-S and (d) ResNet-50 interaction strength J^(m) vs. order m/n, comparing A2MIM with SimMIM, MoCoV3, BYOL, and supervised (PyTorch) baselines.]
Figure A3: Verification of robustness and interaction of A2MIM with ViT-S and ResNet-50 on ImageNet-1K. (a)(b): Robustness against different occlusion ratios of images is studied for A2MIM and various methods. (c)(d): Distributions of the interaction strength J^(m) are explored.
In Fig. 1, we calculate the interaction strength J^(m) (defined in Eqn. 2) for fine-tuned models on ImageNet-100 using the official implementation^4 provided by Deng et al. (2022). Specifically, we use the image of 224 × 224 resolution as the input and calculate J^(m) on 14 × 14 grids, i.e., n = 14 × 14. We set the model output as f(x_S) = log [P(ŷ = y | x_S) / (1 − P(ŷ = y | x_S))] given the masked sample x_S, where y denotes the ground-truth label and P(ŷ = y | x_S) denotes the probability of classifying the masked sample x_S into the true category.
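The model output used for the interaction analysis can be computed from classifier logits as in the following sketch (a numerically stable variant; the helper name is ours).

```python
import torch
import torch.nn.functional as F

def log_odds_output(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """f(x_S) = log [ P(y|x_S) / (1 - P(y|x_S)) ] for masked samples x_S.

    logits: (B, num_classes) model outputs; y: (B,) ground-truth labels.
    """
    log_p = F.log_softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    # log(1 - p) computed stably via log1p(-p), with p clamped below 1.
    log_1mp = torch.log1p(-log_p.exp().clamp(max=1 - 1e-7))
    return log_p - log_1mp
```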
B.3 MIM FROM FREQUENCY PERSPECTIVE
We first plot the log magnitude of Fourier-transformed feature maps of ResNet-50 with different pre-training methods using the tools^5 provided by Park & Kim (2022) on ImageNet-1K. Following (Park & Kim, 2022), we first convert feature maps into the frequency domain and represent them on the normalized frequency domain (the highest frequency components are at {−π, +π}). In Fig. A4(a), we report the amplitude ratio of high-frequency components using the ∆ log amplitude. As shown in Fig. A4(a), inpainting and MIM show similar low-pass filtering effects at convolution layers compared to contrastive learning. This indicates that inpainting and MIM reduce the noise and uncertainty induced by high-frequency features. We argue that the reconstruction performance of MIM is mainly related to low- or high-order interactions of patches (Deng et al., 2022), while reconstruction performance is not directly related to the learned representation quality. Then, we provide the standard deviation of feature maps by block depth as in (Park & Kim, 2022; 2021), which first calculates the feature map variance on the last two dimensions and then averages over the channel dimension for the whole dataset. Fig. A4(b) shows the feature variance of each layer of ResNet-50 with different pre-training methods on IN-1K. This figure indicates that MIM tends to reduce the feature map variance; conversely, supervised training, inpainting, and contrastive learning based on CNNs tend to increase the variance. Compared to MIM, which learns better middle-order interactions, the inpainting task fails to filter out low-order interactions and thus leads to higher variance. To conclude, MIM methods learn middle-order interactions and reduce the feature map uncertainty (high frequencies) based on the CNN encoder for generalized and stabilized feature extraction.
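The two diagnostics of this section can be sketched as below; both are simplified versions of the analysis tools of Park & Kim (2022), with our own reduction choices (e.g., averaging amplitudes over batch and channels) and hypothetical function names.

```python
import torch

def high_freq_log_amplitude(feat: torch.Tensor) -> float:
    """Delta-log-amplitude diagnostic: log amplitude at the highest
    frequency minus that at the lowest, for a feature map (B, C, H, W)."""
    f = torch.fft.fftshift(torch.fft.fft2(feat.float()), dim=(-2, -1))
    amp = f.abs().mean(dim=(0, 1))     # (H, W) amplitude, averaged over B, C
    H, W = amp.shape
    center = amp[H // 2, W // 2]       # lowest frequency (DC) after shift
    corner = amp[0, 0]                 # highest frequency after shift
    return (corner.log() - center.log()).item()

def feature_map_variance(feat: torch.Tensor) -> float:
    """Variance over spatial dims, averaged over channels and batch."""
    return feat.float().var(dim=(-2, -1)).mean().item()
```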
^4 https://github.com/Nebularaid2000/bottleneck
^5 https://github.com/xxxnell/how-do-vits-work
[Figure A4: panels (a) log amplitude vs. normalized depth and (b) feature map variance vs. normalized depth, for BYOL, MoCoV3, Inpainting, SimMIM, DeiT (Sup.), and Random.]
Figure A4: (a) Fourier-transformed feature maps. The vertical axis is the relative log amplitude of the high-frequency components, and the horizontal axis is the normalized depth of the network. The blue columns indicate the pooling layers, while the white columns indicate the convolution layers. (b) Feature map variance. The vertical axis is the average variance value of feature maps. DeiT (Sup.) is supervised pre-training. The results of a randomly initialized network are plotted for reference.
[Figure A5: a "Fox" sample showing the raw image and prediction image with their Fourier spectra, and the L_freq loss weights (w/ and w/o ω) in the Fourier domain.]
Figure A5: Visualization of predicted images and L_freq loss weights in the Fourier domain. From the view of the Fourier spectrum, the raw image (left) contains 99% low-frequency components (usually presenting contents) and rich medium-frequency (structural patterns) and high-frequency components (local details and noise), while the predicted result (middle) provides fewer medium- or high-frequency components. Calculated in the Fourier domain, the loss weights (right) of L_freq w/o ω help the model to learn the full spectrum, while L_freq focuses on the low- and medium-frequency parts, which are more likely to be low-order or middle-order interactions.
C MORE EXPERIMENT RESULTS
C.1 ABLATION OF THE PROPOSED MODULES
In addition to the ablation studies in Sec. 5.4, we provide an ablation study on the proposed L_freq in the Fourier domain, as shown in Figure A5. As discussed in Sec. 4, we hypothesize that learning medium frequencies would help the model better learn middle-order interactions. We thereby propose L_freq to tackle the dilemma of L_spa, which tends to learn low-frequency components (i.e., contents reflected by high-order interactions). Although the reconstruction loss in the Fourier domain has a global perception, the high-frequency components are usually constructed by local details and noise (i.e., low-order interactions), which might hurt generalization abilities. Therefore, we introduce the re-weighting ω(u, v) to force the model to learn more medium-frequency components, which correspond to middle-order interactions. Then, we perform a further analysis of the masked patch size for A2MIM in Tab. A3. Note that we pre-train ResNet-50 for 100 epochs and ViT-B for 400 epochs on ImageNet-1K and report the fine-tuning results. As shown in Tab. A3, when the mask ratio is 60%, the optimal masked patch size is 32 × 32 for A2MIM, which is the same as SimMIM.
Table A3: Ablation of the masked patch size for A2MIM based on ResNet-50 and ViT-B on ImageNet-1K.

Model      Masked patch size  Mask ratio  PT epochs  Top-1 Accuracy (%)
ResNet-50  8 / 16 / 32 / 64   0.6         100        78.2 / 78.6 / 78.8 / 78.7
ViT-B      8 / 16 / 32 / 64   0.6         400        82.9 / 83.4 / 83.5 / 83.3
C.2 ANALYSIS OF OCCLUSION ROBUSTNESS AND INTERACTION OF A2MIM
We further analyze the occlusion robustness and interaction strength of A2MIM with ViT-S (400-epoch pre-training) and ResNet-50 (100-epoch pre-training) on ImageNet-1K, as shown in Fig. A3. Fig. A3(a) and A3(b) show that A2MIM is more robust to occlusion than the baseline SimMIM and contrastive learning methods with both Transformers and CNNs. Meanwhile, we find that MIM methods learn more balanced interaction strengths than both supervised and contrastive learning methods in Fig. A3(c) and A3(d). A2MIM further improves SimMIM by capturing more middle-order interactions (0.2n to 0.6n) with Transformers and CNNs. Therefore, we can conclude that A2MIM helps the model to learn better middle-order interactions between patches for more generalized visual representations.
C.3 SCALING-UP A2MIM
Additionally, we scale up the model size of the backbone encoders to verify the performance of A2MIM with ResNet and ViT on ImageNet-1K. As shown in Table A4, our proposed A2MIM and its advanced variant A2MIM+ consistently improve both contrastive-based and MIM methods across all architecture scales, e.g., A2MIM outperforms SimMIM by 0.5%/0.5%/0.5%/0.2% and 0.6%/0.4% based on ViT-S/B/L/H and ResNet-50/101, demonstrating that A2MIM is an architecture-agnostic and scalable framework for MIM pre-training.
Table A4: ImageNet-1K fine-tuning (FT) top-1 accuracy (%) with ResNet (R) and ViT of various model scales. We adopt the 100-epoch fine-tuning protocols for both architectures.

Methods   Supervision  ViT-S  ViT-B  ViT-L  ViT-H  R-50  R-101
Sup.      Label        79.9   81.8   82.6   83.1   78.1  79.8
MoCoV3    CL           81.4   83.2   84.1   -      78.7  -
DINO      CL           81.5   83.6   -      -      78.7  -
MAE       RGB          -      83.6   85.9   86.9   77.1  -
SimMIM    RGB          81.7   83.8   85.6   86.8   78.2  80.0
MaskFeat  HoG          -      84.0   85.7   -      78.4  -
A2MIM     RGB          82.2   84.2   86.1   87.0   78.8  80.4
A2MIM+    HoG          82.4   84.5   86.3   87.1   78.9  80.5
D VISUALIZATION EXPERIMENTAL DETAILS
In addition to visualization results in Sec. 5.4, we visualize more reconstruction results of A2MIM here. Similar to Fig. 4, we ablate the proposed components in A2MIM based on ResNet-50 in Fig. A6, which demonstrates that A2MIM helps ResNet-50 learn more spatial details, i.e., more middle-order interactions. Moreover, we study the effects of the mask token in both ViTs and CNNs in Fig. A7.
[Figure A6: rows for "Fox", "Cucumber", and "Balloon" samples, showing the raw image, the masked image with a zero mask, and the masked image with an RGB mean mask, alongside the corresponding predictions.]
Figure A6: Visualizations of predicted results from SimMIM (middle) and our A2MIM (right) based on ResNet-50 pre-trained 100 epochs on ImageNet-1K. Notice that T(s*) denotes adding the mask token T to the optimal stage-s in ResNet-50. We ablate the proposed components by adding them to the baseline SimMIM: replacing the zero mask with the RGB mean mask (the modified SimMIM baseline) and adding the mask token T(s*) relieve grid-like artifacts in predicted results; adding the proposed L_freq helps the model to capture more informative details.
[Figure A7: "Goldfish" and "Balloon" samples, showing the raw image, the masked image, and predictions from ViT-B and ResNet-50 before and after removing the learned mask token.]
Figure A7: Visualizations of predicted results with and without the mask token on ImageNet-1K. Notice that mask tokens are adopted in the pre-trained models based on ViT-S (300-epoch) or ResNet-50 (100-epoch). Based on ViT-S, removing the mask token corrupts both the contents of masked patches and the overall colors in SimMIM, while only corrupting the masked contents in A2MIM. Based on ResNet-50, removing the mask token slightly affects spatial details in the masked patches and causes grid-like artifacts in the unmasked patches. The different effects of the mask token in ViT-S and ResNet-50 might be because the two architectures use different spatial-mixing operators and normalization layers. As for ViTs, the self-attention operation captures informative details from unmasked patches, but the non-overlapping patch embedding and layer normalization keep each patch isolated. The mask token learns the mean templates (contents) of masked patches and gathers spatial details from unmasked patches via the self-attention operation. As for CNNs, each patch shares the same contents extracted by batch normalization layers, and the convolution operation extracts features from unmasked and masked patches equally. The mask token learns more high-frequency and informative details. | 1. What is the main contribution of the paper, and how does it differ from prior works?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its application to CNNs?
3. Do you have any concerns about the motivation or assumptions behind the work?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or aspects that the reviewer would like further clarification or explanation on? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper analyzes the essence of MIM and introduces a universal MIM method that can be applied to both CNNs and Transformers. There are several designs that together build the introduced method, including the mean RGB replacement, the intermediate mask, the frequency reconstruction targets and the HOG targets. Experimental results on both ResNet-50 and ViTs are shown.
Strengths And Weaknesses
Strengths
The paper is well-organized and easy to follow.
The figures, tables, and visualizations indeed help to understand the author's point of view.
It is new to apply MIM to convolutional neural networks.
Weaknesses
I have the most concern about the motivation for this work. As we know, ViTs lack inductive biases, and MIM addresses this issue to some extent by enforcing visual context reasoning. However, CNNs are different: they are good at capturing inductive biases. Furthermore, as observed in the authors' results (Tab. 1), a lot of effort has been spent on MIM pre-training, yet the final results are almost the same as the supervised counterparts. So is it a false proposition to apply MIM to CNNs?
I have another concern about the authors' declaration that MIM enables the network with a better feature extraction ability. As a common observation in previous MIM works, the linearly probing results of MIM-pretrained models are unsatisfactory, worse than the contrastive methods. MIM is known to provide transferable model parameters rather than extracting out-of-the-box features.
In Sec. 3.1, the authors analyze different augmentation methods, however, the setup is not fair. MIM is employed for pretraining while the others (e.g., CutMix) are for fine-tuning.
There are many components in the introduced method, including the mean RGB replacement, the intermediate mask, the frequency reconstruction targets, and the HOG targets. How much does each component contribute to the final performance?
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well written, however, I do not agree with several claims (see Weaknesses).
Quality: Comprehensive experiments are conducted, however, the premises should be verified, i.e., do we need MIM on CNNs?
Novelty: Applying MIM to CNNs is new.
Reproducibility: Implementation details are provided. |
ICLR | Title
Unified neural representation model for physical and conceptual spaces
Abstract
The spatial processing system of the brain uses grid-like neural representations (grid cells) for supporting vector-based navigation. Experiments also suggest that neural representations for concepts (concept cells) exist in the human brain, and conceptual inference relies on navigation in conceptual spaces. We propose a unified model called “disentangled successor information (DSI)” that explains neural representations for physical space and linguistic concepts. DSI generates grid-like representations in a 2-dimensional space that highly resemble those observed in the brain. Moreover, the same model creates concept-specific representations from linguistic inputs, corresponding to concept cells. Mathematically, DSI vectors approximate value functions for navigation and word vectors obtained by word embedding methods, thus enabling both spatial navigation and conceptual inference based on vector-based calculation. Our results suggest that representations for space and concepts can emerge from a shared mechanism in the human brain.
1 INTRODUCTION
In the brain, grid cells in the entorhinal cortex (EC) represent the space by grid-like representations (Hafting et al., 2005; Doeller et al., 2010; Jacobs et al., 2013). This neural representation is often related to vector-based spatial navigation because grid cells provide global metric over the space. Theoretically, an animal can estimate the direction to a goal when representations of a current position and a goal position are given (Fiete et al., 2008; Bush et al., 2015). Furthermore, self-position can be estimated by integrating self-motions when sensory information is not available (McNaughton et al., 2006). These functions are the basis of robust spatial navigation by animals.
There are not only spatial but also conceptual representations in EC. Neurons called as “concept cells” have been found in human medial temporal lobe including EC (Quiroga, 2012; Reber et al., 2019). Concept cells respond to specific concepts, namely, stimuli related to a specific person, a famous place, or a specific category like “foods” and “clothes”. Furthermore, recent experiments also suggest that grid-like representations appear not only for physical space but also for conceptual space if there is a 2-dimensional structure (e.g. lengths of a neck and legs, intensity of two odors), and those representations are the basis of vector-based conceptual inference (Bao et al., 2019; Constantinescu et al., 2016; Park et al., 2021). Thus, it is expected that there is a shared processing mechanism for physical and conceptual spaces in EC. Existence of shared neural mechanism may also explain why humans use sense of physical space (such as directionality) to communicate abstract concepts (conceptual metaphor (Lakoff & Johnson, 1980)). However, a principle behind such universal computation in the brain is still unclear.
In this paper, we propose a representation model which we call disentangled successor information (DSI) model. DSI is an extension of successor representation (SR), which stems from a theory of reinforcement learning and became one of promising computational models of the hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Like eigenvectors of SR, DSI forms grid-like codes in a 2-D space, and those representations support vector-based spatial navigation because DSI approximates value functions for navigation in the framework of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021). Remarkably, when we apply DSI to text data by regarding a sequence of words as a sequence of states, DSI forms concept-specific representations like concept cells. Furthermore, we show mathematical correspondence between DSI and word embedding models in natural language processing (NLP)
(Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014), thus we can perform intuitive vector-based conceptual inference as in those models. Our model reveals a new theoretical relationship between spatial and linguistic representation learning, and suggests a hypothesis that there is a shared computational principle behind grid-like and concept-specific representations in the hippocampal system.
2 CONTRIBUTIONS AND RELATED WORKS
We summarize contributions of this work as follows. (1) We extended SR to successor information (SI), by which we theoretically connected reinforcement learning and word embedding, thus spatial navigation and conceptual inference. (2) We found that dimension reduction with constraints for grid-like representations (decorrelative NMF) generates disentangled word vectors with conceptspecific units, which has not been found previously. (3) Combining these results, we demonstrated that a computational model for grid cells can be extended to represent and compute linguistic concepts in an intuitive and biologically plausible manner, which has not been shown in previous studies.
Our model is an extension of successor representation (SR), which is recently viewed as a plausible model of hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Furthermore, default representation (DR), which is based on linear reinforcement learning theory, has been also proposed as a model of EC (Piray & Daw, 2021). We show that our model can extract linguistic concepts, which has not been shown for SR and DR. Furthermore, we demonstrate vector-based compositionality of words in our model, which expands the range of compositionality of EC representations (Piray & Daw, 2021) to semantic processing.
Our model produces biologically plausible grid-like representations in 2-D space, which supports spatial navigation. Previous studies have revealed that non-negative and orthogonal constraints are important to obtain realistic grid-like representations (Dordek et al., 2016; Sorscher et al., 2019). Furthermore, recurrent neural networks form grid-like representations through learning path integration, and those representations support efficient spatial navigation (Banino et al., 2018; Cueva & Wei, 2018; Gao et al., 2019). Some of those models have reproduced experimentally observed scaling ratios between grid cell modules (Banino et al., 2018; Sorscher et al., 2019). However, previous models have not been applied to learning of linguistic concepts, or other complex conceptual spaces in real-world data. Whittington et al. (2020) proposed a unified model for spatial and nonspatial cognition. However, their model was applied only to simple graph structures and conceptual specificity like our model was not observed.
Analogical inference by our model is a same function as word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). However, a unique feature of DSI representations is that each dimension of vectors corresponds to a specific concept like concept cells in the human brain (Quiroga, 2012; Reber et al., 2019). Our model provides biological plausible interpretation of word embedding: each word is represented by combination of disentangled conceptual units, inference is recombination of those concepts, and such representations emerge through the same constraints with grid cells. It was recently shown that transformer-based models (Vaswani et al., 2017; Brown et al., 2020), which are currently state-of-the-art models in NLP, generate grid-like representations when applied to spatial learning (Whittington et al., 2022). Similarly to our model, this finding implies the relationship between spatial and linguistic processing in the brain. However, concept-specific representations has not been found in such model. Furthermore, clear theoretical interpretation in this study depends on the analytical solution for skip-gram (Levy & Goldberg, 2014). Such analytical solution is currently unknown for transformer-based models.
3 MODEL
3.1 DISENTANGLED SUCCESSOR INFORMATION
Let us assume Ns discrete states exist in the environment. Successor representation (SR) between two states s and s′ is defined as
SR(s, s′) = E [ ∞∑ t=0 γtδ(st, s ′)|s0 = s ] = ∞∑ t=0 γtP (st = s ′|s0 = s), (1)
where δ(i, j) is Kronecker’s delta and γ is a discount factor. We describe how we calculate SR in this study in Appendix A.1. SR and its dimension-reduced representations have been viewed as models of hippocampus and entorhinal cortex, respectively (Stachenfeld et al., 2017).
Based on SR, we define successor information (SI) and positive successor information (PSI) as
SI(s, s′) = log(SR(s, s′))− log(P (s′)), (2) PSI(s, s′) = max{SI(s, s′), 0}. (3)
In this study, we regard this quantity as a hippocampal model instead of SR (Figure 1A).
Next, we introduce a novel dimension reduction method which we call decorrelative non-negative matrix factorization (decorrelative NMF). Decorrelative NMF can be regarded as a variant of NMF (Lee & Seung, 1999) with additional constraints of decorrelation. By applying decorrelatve NMF to PSI, we obtain representation vectors called as disentangled successor information (DSI), which we regard as a model of EC (Figure 1A). In decorrelative NMF, we obtain D-dimensional vectors x(s) and w(s) (D < Ns) by minimization of the following objective function
J = 1
2 ∑ s,s′ ρ(s, s′)(PSI(s, s′)− x(s) ·w(s′))2
+ 1
2 βcor ∑ i ̸=j (Corr(i, j))2 + 1 2 βreg ∑ s (||x(s)||2 + ||w(s)||2), (4)
subject to non-negative constraints ∀i, xi(s) ≥ 0, wi(s) ≥ 0. ρ(s, s′) is a weight for the square error
ρ(s, s′) = 1
NsV
( 1
M PSI(s, s′) + ρmin
) , (5)
where M and V are mean and variance of PSI, respectively, and ρmin is a small value to avoid zero-weight. Corr(i, j) is a correlation between two dimensions in x(s)
Corr(i, j) = ∑ s x̃i(s)x̃j(s)√∑
s(x̃i(s)) 2 ∑ s(x̃j(s)) 2 , (6)
where x̃i(s) = xi(s) − 1Ns ∑
s xi(s). The first term of the objective function is weighted approximation error minimization, the second term works for decorrelation between dimensions, and the third term regularizes representation vectors. Optimization was performed by Nesterov’s accelerated gradient descent method (Nesterov, 1983) with rectification of xi(s), wi(s) every iteration. We describe additional details in Appendix A.2.
3.2 RELATIONSHIPS WITH REINFORCEMENT LEARNING AND WORD EMBEDDING
We show dual interpretation of our model. On the one hand, DSI approximates value estimation of linear reinforcement learning, thus support goal-directed decision making and navigation. On the
other hand, the same representation approximates word embedding in NLP, thus support semantic computation.
First, our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation. Linear reinforcement learning assumes default policy and imposes additional penalty on deviation from default policy, then we can obtain value functions explicitly by solving linear equations. Let us consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state sG arbitrarily chosen from non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability pNT . Furthermore, we assume that reward at non-terminal states are uniformly negative and reward at the terminal state is positive so that the agent has to take a short path to goal to maximize reward. In this setting, we can obtain value functions v∗(s) in linear reinforcement learning as
λ−1v∗(s) = log(SRd(s, sG))− logP d(sG) = SId(s, sG) ≈ x(s) ·w(sG), (7)
where SRd(s, sG) and SId(s, sG) are SR and SI under the default policy, respectively. We describe details of derivation in Appendix A.3. Therefore, SI is proportional to value functions for spatial navigation and inner products of DSI vectors approximates value functions. Based on this interpretation, we basically regard x(s) as a representation of each state, and w(s) represents a temporary goal.
Second, DSI is related to word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In linguistics, pointwise mutual information (PMI) and positive pointwise mutual information (PPMI) are used to measure the degree of coincidence between two words (Levy & Goldberg, 2014). They are defined as
PMI = log
( P (wordi, wordj)
P (wordi)P (wordj)
) , (8)
PPMI = max{PMI, 0}, (9)
where P (wordi, wordj) is a coincidence probability of two words (in a certain temporal window). It has been proven that dimension reduction of PMI approximates a word embedding method skipgram (Mikolov et al., 2013a;b), and similar performance is obtained using PPMI (Levy & Goldberg, 2014). GloVe (Pennington et al., 2014) is also based on this perspective. SI can be written as
SI(s, s′) = log(SR(s, s′))− log(P (s′)) = log (∑∞ t=0 γ tP (st = s ′, s0 = s)
P (s)P (s′)
) . (10)
In this formulation, we can see mathematical similarity between PMI and SI by regarding words as states (s = wordi, s′ = wordj), thus the correspondence between PPMI and PSI. Because of this relationship, we can expect that DSI, which is obtained through dimension reduction of PSI, has similar properties to word embedding methods. The difference is how to count coincidence: the coincidence in SI is evaluated with an asymmetric exponential kernel as in SR, in contrast that a symmetric rectangular temporal window is often used in typical word embedding (see Appendix A.4 for further detail).
3.3 DECORRELATIVE NMF RELATES TO GRID CELLS AND DISENTANGLEMENT
Constraints in decorrelative NMF (non-negativity, decorrelation (or orthogonality), and regularization) are important for generation of grid cells, as shown in previous theoretical studies on grid cells. (Dordek et al., 2016; Cueva & Wei, 2018; Banino et al., 2018; Gao et al., 2019; Sorscher et al., 2019). They are also biologically plausible because neural activity is basically non-negative and decorrelation is possible through lateral inhibition. On the other hand, non-negativity (Oja & Plumbley, 2004) and decorrelation (Hyvärinen & Oja, 2000) are also important for extraction of independent components, and it is known that imposing independence in latent space of deep generative models results in the emergence of disentangled representations for visual features (Higgins et al., 2017). Therefore, in word embedding, we expected that those constraints help emergence of independent and disentangled units for linguistic concepts. Constraints in decorrelative NMF are actually crucial for results obtained in this study (Appendix A.5).
As disentangled visual representations explain single-cell activities in higher-order visual cortex (Higgins et al., 2021), we may similarly interpret conceptual representations in our model as concept cells in the human medial temporal lobe (Quiroga, 2012). Previous studies suggest that each concept cell respond to a specific concept, whereas population-level activity patterns represent abstract semantic structures (Reber et al., 2019). Such property is consistent with the factorized and distributed nature of disentangled representation vectors.
4 LEARNING REPRESENTATIONS OF PHYSICAL SPACES
In this section, we empirically show that DSI model forms biologically plausible grid-like representations in a 2-D physical space, and they support spatial navigation. These results also apply to conceptual spaces with the 2-D structure, depending on the definition of states.
4.1 LEARNING PROCEDURE
As an environment, we assumed a square room tiled with 30×30 discrete states (Figure 2A). In each simulation trial, an agent starts at one of those 900 states and transits to one of eight surrounding states each time except that transitions are limited at states along the boundary (the structure was not a torus). Transitions to all directions occur with an equal probability. We performed 500 simulation trials and obtained a sequence of 100,000 time steps in each trial. We calculated occurrence probabilities (P (s)) and a successor representation matrix (SR(s, s′)) of 900 states from those sequences, and calculated PSI and DSI (100-dimensional) as described in the Model section. The discount factor γ was set to 0.99. We additionally tested spatial navigation in a structure with separated and interconnected rooms (see Figure 3C). In that case, we used the discount factor γ = 0.999.
4.2 EMERGENCE OF GRID-LIKE REPRESENTATIONS
Here we call each dimension of DSI representation vectors x(s) as a neural ”unit”, and we regard a value in each dimension at each state as a neural activity (or a neural representation). As shown in Figure 2B, many units exhibited grid-like activity patterns in the space. We performed a gridness analysis that has been used in animal experiments (Sargolini et al., 2006) and found that 51% of units were classified as grid cells. Similarly, 53% of units in w(s) were classified as grid cells.
Furthermore, we checked whether DSI representations in the physical space reproduce a property of biological grid cells. Actual grid cells in the rat brain exhibit multiple discrete spatial scales and the ratio between grid scales of adjacent modules is √ 2 (Stensola et al., 2012). We constructed a distribution of grid scales of DSI units by kernel density estimation, which revealed that multiple peaks of grid scales existed and the ratio between grid scales of adjacent peaks was √ 2 (Figure 2C). These results show that DSI model constructs biologically plausible grid-like representations in the 2-D physical space. We describe details of analysis methods in Appendix A.6.
4.3 NEAR-OPTIMAL SPATIAL NAVIGATION BY DSI VECTORS
As discussed in Section 3.2, the inner product of DSI representations approximate value functions for spatial navigation. Therefore, we tested whether DSI representations actually enable nearoptimal navigation in the space.
We assume that a start location (state sinit) and a goal location (state sG) are randomly given in each trial such that the shortest path length is minimally 10, and an agent has to navigate between them. To solve the task, we define a vector-based state transition rule. Suppose that the agent exists at a state s, and a set of neighboring states of s is A(s). Given the goal representation vector w(sG), a value function of a neighboring state snext ∈ A(s) is estimated by x(snext) · w(sG), and the agents transits to the state that has a maximum value. This state transition rule can be geometrically interpreted as the choice of movements that has the closest angle with the goal vector in the representation space (Figure 3A). Otherwise, we can interpret that the agent estimates value functions by linear readout from grid-like DSI representations. Because of the approximation error, this rule did not always give optimal navigation (the shortest path from the start to the goal). However, the agent could take the shortest path to the goal in 93.9% of 1,000 trials we tested (an example is shown in
(
√
2)n
, respectively.
Figure 3B). Furthermore, 97.2% were near-optimal navigation in which the actual path length was shorter than 1.1 times the shortest path length. The same framework also worked in a relatively complex environment with separated rooms (Figure 3C). In this environment, the ratio of optimal and near-optimal navigation was 68% and 82.6%, respectively. We also confirmed that we can perform path integration based on DSI representations using movement-conditional recurrent weights (McNaughton et al., 2006; Burak & Fiete, 2009; Oh et al., 2015; Gao et al., 2019) (Appendix A.7). These results show that DSI representations can support spatial navigation, which corresponds to the contribution of biological grid cells for spatial navigation.
4.4 VECTOR-BASED INFERENCE OF SPATIAL CONTEXTS
We additionally found that we can perform vector-based inference for spatial navigation in a novel context. First, we constructed DSI representation vectors in spatial contexts A and B, each of which has a barrier (Figure 4A). Then, we created representation vectors for a novel context A+B with two barriers by simply adding representation vectors for familiar contexts A and B (Figure 4A). We tested vector-based navigation (described in the section 4.3) in three spatial contexts A, B, and A+B, using one of three representations for A, B, and A+B. Naturally, representation vectors for A and B gave the best performance in contexts A and B, respectively (Figure 4B). Notably, composite representation vectors for A+B achieved the best performance in the context A+B (Figure 4B). This
result suggests that we can utilize vector-based composition of representations for a novel spatial context. We describe details of the simulation in Appendix A.8.
Additional analysis by multidimensional scaling (MDS) suggests that summing DSI vectors leads to composition of an appropriate metric space for the novel context (Appendix A.9). This is potentially useful for composing multiple constraints that change reachability between states in various tasks (such as control of robotic arms and playing computer games), like the composition of tasks in soft-Q learning (Haarnoja et al., 2018; Makino).
5 LEARNING REPRESENTATIONS OF CONCEPTUAL SPACES
In this section, we show that the same DSI model can learn representations for a complex conceptual space from linguistic inputs, and those representations support vector-based conceptual inference.
5.1 LEARNING PROCEDURE
We used text data taken from English Wikipedia, which contains 124M tokens and 9376 words (see Appendix A.10 for details of the preprocessing). To construct DSI representations, we regarded each word as a “state” and considered the text data as a sequence of 9376 states ($N_s = 9376$). Then, we applied exactly the same learning procedure as in the experiment on physical spaces. We obtained 300-dimensional DSI representation vectors for each word. The discount factor γ was set to 0.9. The settings of the other parameters were the same as in the experiment on physical spaces.
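To make the words-as-states construction concrete, a minimal sketch of the PSI computation is shown below; it assumes tokens have been mapped to integer state ids and that SR and the occurrence probabilities have been estimated from that sequence (e.g., by the direct-count method of Appendix A.1). The function name is illustrative, not the authors' code:

```python
import numpy as np

def psi_from_sr(SR, P, eps=1e-12):
    """PSI (Eqs. 2-3): SI(s, s') = log SR(s, s') - log P(s'), clipped at
    zero. SR is the (Ns, Ns) successor matrix estimated from the token
    sequence, and P holds the occurrence probability of each word/state."""
    SI = np.log(np.maximum(SR, eps)) - np.log(np.maximum(P[None, :], eps))
    return np.maximum(SI, 0.0)

# Words are treated as states: map tokens to integer ids first, e.g.
# ids = [vocab[w] for w in tokens], then estimate SR and P from `ids`.
```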
5.2 EMERGENCE OF CONCEPT-SPECIFIC REPRESENTATIONS
As in the previous section, we regarded each dimension of representation vectors as a neural unit, and checked how various words activate those units. Specifically, we listed the ten words that elicited the highest activities in each unit (TOP-10 words). Consequently, we found that many units are activated by words related to specific concepts (Figure 5; other examples in Appendix A.12), which could be named as a “game cell” or “president cell”, for example. We quantified this conceptual specificity through WordNet-based semantic similarity between words (Princeton University, 2010). We compared the mean similarity among TOP-10 words with a null distribution of similarity between random word pairs, by which we determined statistically significant concept-specific units and quantified the degree of conceptual specificity of each unit (see Appendix A.11 for details). DSI exhibited a larger number of significantly concept-specific units and a higher average conceptual specificity than other well-established word embedding methods such as skip-gram and GloVe (Table 1) (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). We also analyzed the conceptual specificity of representations in the embedding layer of a pretrained BERT model (bert-base-uncased in Hugging Face transformers) (Devlin et al., 2018; Wolf et al., 2020), which was lower than that of DSI (Table 1). This result shows that our DSI model forms more concept-specific representations than other models.
Additional analyses revealed that word representation vectors are non-sparse and distributed (Appendix A.15). Therefore, each word is represented by the combination of concept-specific units
shared by several related words. For example, “France” can be represented by the combination of units that we could name the French cell and the country cell (Appendix A.15).
5.3 VECTOR-BASED COMPUTATION IN THE CONCEPTUAL SPACE
Given that DSI and word embedding methods are mathematically similar (Section A.4), we expect that DSI vectors have similar properties to the representation vectors learned by those word embedding methods. We evaluated the performance of DSI vectors in two tasks that have been used to evaluate word embedding methods: word similarity and analogical inference (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In the word similarity task, we calculated the cosine similarity between representation vectors of word pairs, and evaluated the rank correlation between those cosine similarities and human word similarities (WS353 dataset (Agirre et al., 2009); 248/345 word pairs were used). In the analogical inference task, we performed calculations of vectors such as $x(\text{king}) - x(\text{man}) + x(\text{woman})$ and checked whether the resultant vector has the maximum cosine similarity with $x(\text{queen})$ (Mikolov's dataset (Mikolov et al., 2013a;b); 3157/19544 questions were used; examples in Appendix A.13). The results show that DSI vectors achieved performance comparable to other well-established word embedding methods (Table 2).
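A minimal sketch of the two evaluation protocols is shown below, assuming a dictionary `vec` that maps words to their DSI vectors; dataset loading and word filtering are omitted, and all names are illustrative:

```python
import numpy as np
from scipy.stats import spearmanr

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def similarity_score(vec, rated_pairs):
    """Rank correlation between model and human word similarities.
    `rated_pairs` holds (word1, word2, human_rating) triples."""
    model = [cos(vec[w1], vec[w2]) for w1, w2, _ in rated_pairs]
    human = [r for _, _, r in rated_pairs]
    return spearmanr(model, human).correlation

def analogy(vec, a, b, c):
    """Answer 'a is to b as c is to ?' by the nearest neighbor
    (in cosine similarity) of the vector b - a + c."""
    query = vec[b] - vec[a] + vec[c]
    best, best_sim = None, -np.inf
    for w, v in vec.items():
        if w in (a, b, c):
            continue
        s = cos(query, v)
        if s > best_sim:
            best, best_sim = w, s
    return best

# e.g. analogy(vec, "man", "king", "woman") is expected to return "queen"
```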
This result indicates that the similarity of DSI representation vectors corresponds to semantic similarity. This property is consistent with the experimental observation that population-level pattern similarity of concept cell activities represents semantic categories (Reber et al., 2019). By visualizing the structure of DSI representations by MDS, we can actually see clustering of words corresponding to the 10 semantic categories used in Reber et al. (2019) (Appendix A.14). Furthermore, conceptual inference is possible through arithmetic composition of DSI vectors. We additionally found that this inference is an intuitive recombination of concept-specific units in some cases. For example, the transformation from “Paris” to “France” corresponds to activation of the country cell and deactivation of the capital cell, which is possible by summing the difference of the Germany and Berlin vectors (Appendix A.15).
6 DISCUSSION
In this paper, we proposed a theoretically interpretable and biologically plausible neural representation model for physical and conceptual spaces. We demonstrated that our DSI model forms grid-like representations in the physical space and concept-specific representations in the linguistic space, which are assumed to correspond to neural representations in EC. Furthermore, we showed that SI is mathematically related to linear reinforcement learning and word embedding methods, thus DSI representations support spatial navigation and conceptual inference. These results suggest that we can extend the spatial representation model of EC to learn and compute linguistic concepts, which apparently seems a different computational domain from physical space.
In Section 5.2, we demonstrated concept-specific representations created from text data. To the best of our knowledge, such a property has not been reported for any word embedding method. However, we unexpectedly found that continuous-bag-of-words (CBOW) showed relatively high conceptual specificity. Although DSI is related to PMI, skip-gram, and GloVe, we have not found a relationship to CBOW. Further clarification of the necessary conditions for conceptual specificity is still an open problem.
Although DSI has a clear mathematical interpretation, how biological neural networks can learn DSI is still unclear. A possible solution is an extension of the skip-gram neural network with SR, non-negativity, and decorrelation. Because SI corresponds to PMI, which is the optimum of the skip-gram neural network, we can expect that such an extended skip-gram network learns DSI. Building such a model and relating it to the circuit mechanism in the hippocampus and EC are left for future research.
Our model relates word embedding to conceptual representations in the brain. A previous study showed that skip-gram representations support high-performance decoding of semantic information from fMRI data (Nishida & Nishimoto, 2018). Another study revealed that hippocampal theta oscillation codes semantic distances between words measured in a word2vec subspace (Solomon et al., 2019). These experimental results support our hypothesis. However, recent studies have shown that representations in transformer-based models (Vaswani et al., 2017) such as GPT (Brown et al., 2020) achieve remarkable performance in linear fitting to neural recordings during linguistic processing (Goldstein et al., 2022; Schrimpf et al., 2021). A major difference between our DSI model and transformer-based models is that DSI representations are basically fixed (static embedding), whereas transformer-based models flexibly create context-dependent representations (dynamic embedding). Conceptual interpretation obviously depends on the context; thus, activities of concept cells are context-dependent (Bausch et al., 2021). Therefore, our DSI model should be extended to process context dependence, hopefully by combination with other models for learning context-dependent latent cognitive states (Uria et al., 2020; George et al., 2021; Whittington et al., 2020).
Another direction of future research is application to general conceptual spaces by learning DSI representations from low-level sensory inputs, like spatial learning from visual and auditory inputs in previous models (Banino et al., 2018; Taniguchi et al., 2018; Uria et al., 2020). This may be possible by learning discrete states through unsupervised clustering for deep networks (Caron et al., 2018). As for the human brain, infants probably form primitive spatial and conceptual representations from sensory signals, and later linguistic inputs enrich those representations. We speculate that real-world sensory data also contain the information of the conceptual space, for which DSI can be extended to learn those structures. Such a model would clarify the role of the hippocampal system in the computation of general conceptual spaces.
A APPENDIX
A.1 CALCULATION OF SR
SR is a variant of value functions in reinforcement learning, thus we can use various methods such as temporal-difference (TD) learning for the construction. Throughout this study, we used a direct count method because we performed only offline processing of finite data. In a sequence of states $\{s_1, \ldots, s_t, \ldots, s_T\}$, we recursively calculated exponential traces of past states $z(s, t) = \sum_{\tau=0}^{t-1} \gamma^{\tau} \delta(s_{t-\tau}, s)$ as
$$z(s, t) = \gamma z(s, t-1) + \delta(s_t, s), \qquad (11)$$
and calculated SR from state counts and coincidence counts as
$$SR(s, s') = \frac{\sum_{t=1}^{T} z(s, t)\, \delta(s_t, s')}{\sum_{t=1}^{T} \delta(s_t, s)}. \qquad (12)$$
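The following is a minimal NumPy sketch of this direct-count estimator (Eqs. 11-12); names are illustrative, and the loop assumes an integer-coded state sequence:

```python
import numpy as np

def successor_representation(seq, n_states, gamma):
    """Direct-count SR from a state sequence (Eqs. 11-12):
    z(s, t) accumulates discounted past visits to s, and
    SR(s, s') = sum_t z(s, t) * [s_t = s'] / sum_t [s_t = s]."""
    z = np.zeros(n_states)               # exponential traces of past states
    counts = np.zeros(n_states)          # state visit counts
    coinc = np.zeros((n_states, n_states))
    for s_t in seq:
        z = gamma * z                    # decay: z(s, t) = gamma * z(s, t-1)
        z[s_t] += 1.0                    # ... + delta(s_t, s)
        coinc[:, s_t] += z               # accumulate z(s, t) * delta(s_t, s')
        counts[s_t] += 1.0
    return coinc / np.maximum(counts[:, None], 1.0)
```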
A.2 DETAILS OF DECORRELATIVE NMF
In decorrelative NMF, we iteratively updated vectors x(s) and w(s) by Nesterov’s accelerated gradient descent method to minimize the objective function (Eq. 4), rectifying all elements every iteration. Gradients are
$$\frac{\partial J}{\partial x_k(s)} = -\sum_{s'} \rho(s, s')\,(PSI(s, s') - x(s) \cdot w(s'))\, w_k(s') + \beta_{cor} \sum_{j \neq k} \frac{Corr(k, j)\, \tilde{x}_j(s)}{\sqrt{\sum_s (\tilde{x}_k(s))^2 \sum_s (\tilde{x}_j(s))^2}} + \beta_{reg}\, x_k(s), \qquad (13)$$
$$\frac{\partial J}{\partial w_k(s')} = -\sum_{s} \rho(s, s')\,(PSI(s, s') - x(s) \cdot w(s'))\, x_k(s) + \beta_{reg}\, w_k(s'). \qquad (14)$$
We note that we regarded the mean and variance of $x_i(s)$ in the correlation ($\frac{1}{N_s}\sum_s x_i(s)$ and $\sum_s (\tilde{x}_i(s))^2$ in Eq. 6) as constants in the calculation of these gradients. Practically, this heuristic did not affect the performance of decorrelation.
Throughout this paper, the learning rate was 0.05 and the number of iterations was 10,000. Parameters were $\beta_{cor} = 1$, $\beta_{reg} = 0.001$, and $\rho_{min} = 0.001$.
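As a concrete illustration, a minimal NumPy sketch of this optimization is given below; it uses plain gradient descent in place of the Nesterov acceleration described above (a simplification made for brevity), and all function and variable names are illustrative rather than the authors' code:

```python
import numpy as np

def decorrelative_nmf(PSI, dim, iters=10000, lr=0.05,
                      b_cor=1.0, b_reg=1e-3, rho_min=1e-3, seed=0):
    """Minimal decorrelative NMF (Eqs. 4, 13-14); plain gradient descent
    replaces the paper's Nesterov acceleration for brevity."""
    rng = np.random.default_rng(seed)
    n = PSI.shape[0]
    X = rng.random((n, dim))                      # x(s) as rows
    W = rng.random((n, dim))                      # w(s) as rows
    rho = (PSI / PSI.mean() + rho_min) / (n * PSI.var())  # Eq. 5 weights
    for _ in range(iters):
        E = rho * (PSI - X @ W.T)                 # weighted residuals
        Xc = X - X.mean(axis=0)                   # centered columns
        norm = np.sqrt((Xc ** 2).sum(axis=0)) + 1e-12
        C = (Xc.T @ Xc) / np.outer(norm, norm)    # Corr(i, j), Eq. 6
        np.fill_diagonal(C, 0.0)                  # off-diagonal terms only
        gX = -E @ W + b_cor * ((Xc / norm) @ C) / norm + b_reg * X  # Eq. 13
        gW = -E.T @ X + b_reg * W                                   # Eq. 14
        X = np.maximum(X - lr * gX, 0.0)          # rectify every iteration
        W = np.maximum(W - lr * gW, 0.0)
    return X, W
```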
A.3 MATHEMATICAL RELATIONSHIP OF DSI AND REINFORCEMENT LEARNING
In this section, we show that our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation.
In linear reinforcement learning, an agent aims to maximize “gain” instead of reward. Assuming a default policy $\pi^d(s)$ (any policy is available; typically a random walk in the case of an exploration task), the gain function is defined as
$$g(s) = r(s) - \lambda\, KL(\pi(s)\,\|\,\pi^d(s)), \qquad (15)$$
where $r(s)$ is the expected reward at state $s$ and $\lambda KL(\pi(s)\,\|\,\pi^d(s))$ is the cost imposed on the difference between the current policy $\pi(s)$ and the default policy $\pi^d(s)$ ($\lambda$ is a relative weight of the cost). Then, previous works have shown that the optimal policy and the corresponding value functions can be determined explicitly by solving linear equations (Todorov, 2006; 2009; Piray & Daw, 2021). Here we consider an environment that consists of $N_N$ non-terminal states and $N_T$ terminal states. We define two transition probability matrices under the default policy: $P_{NT}$ is an $N_N \times N_T$ matrix for transitions from non-terminal states to terminal states, and $P_{NN}$ is an $N_N \times N_N$ matrix for transitions across non-terminal states. Furthermore, $r_N$ and $r_T$ are vectors of rewards at non-terminal and terminal states, respectively. In this condition, the vector of value functions under the optimal policy, $v^* = (v^*(s_1), \ldots, v^*(s_{N_N}))$, is obtained as
$$\exp(\lambda^{-1} v^*) = M P_{NT} \exp(\lambda^{-1} r_T), \qquad (16)$$
where $M = (\mathrm{diag}(\exp(-\lambda^{-1} r_N)) - P_{NN})^{-1}$ is the DR (Piray & Daw, 2021). To relate $v^*$ to SI, we consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state $s_G$ arbitrarily chosen from the non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability $p_{NT}$. Furthermore, we assume that rewards at non-terminal states are uniformly negative and the reward at the terminal state is positive, so that the agent has to take a short path to the goal to maximize reward. Specifically, we assume all elements of $r_N$ are $\lambda \log \gamma$, and $r_T = -\lambda(\log \gamma + \log p_{NT} + \log P^d(s_G))$, where $\gamma$ is an arbitrary value in the range $(0, 1)$ and $P^d(s_G)$ is the probability of visiting the state $s_G$ under the default policy. Then, we obtain
$$\exp(\lambda^{-1} v^*) = \frac{1}{P^d(s_G)} (I - \gamma P_{NN})^{-1} e(i_G), \qquad (17)$$
where $e(i_G) = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ ($i_G$ is the index of the goal state). Because $(I - \gamma P_{NN})^{-1}$ is equivalent to a successor representation matrix with a discount factor $\gamma$ (Dayan, 1993; Stachenfeld et al., 2017), we finally obtain
$$\lambda^{-1} v^*(s) = \log(SR^d(s, s_G)) - \log P^d(s_G) = SI^d(s, s_G) \approx x(s) \cdot w(s_G), \qquad (18)$$
where $SR^d(s, s_G)$ and $SI^d(s, s_G)$ are SR and SI under the default policy, respectively. Thus, SI is proportional to the value function for spatial navigation, and inner products of DSI vectors approximate value functions. Based on this interpretation, we basically regard $x(s)$ as a representation of each state, and $w(s)$ represents a temporary goal.
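To make this concrete, the sketch below computes $\lambda^{-1} v^*$ from a default transition matrix via Eqs. 17-18; it approximates $P^d(s_G)$ by the stationary visitation distribution of the default policy, which is our illustrative assumption rather than the exact procedure above:

```python
import numpy as np

def optimal_values(P_NN, gamma, goal):
    """lambda^{-1} v*(s) = log SR^d(s, s_G) - log P^d(s_G) (Eq. 18),
    with P^d approximated by the stationary distribution of P_NN."""
    n = P_NN.shape[0]
    SR = np.linalg.inv(np.eye(n) - gamma * P_NN)  # (I - gamma P_NN)^{-1}
    evals, evecs = np.linalg.eig(P_NN.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()            # stationary P^d(s)
    return np.log(SR[:, goal]) - np.log(pi[goal])
```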
A.4 MATHEMATICAL RELATIONSHIP OF DSI AND WORD EMBEDDING
In this section, we discuss the relationship between SI and PMI (Levy & Goldberg, 2014) in detail. PMI is
$$PMI = \log\left(\frac{P(word_i, word_j)}{P(word_i)\, P(word_j)}\right), \qquad (19)$$
where $P(word_i, word_j)$ is the coincidence probability of two words (within a certain temporal window).
To relate PMI to SI, we regard words as states: $s = word_i$, $s' = word_j$. Furthermore, we consider a specific way to count the coincidence probability. In typical word embedding, a finite symmetric rectangular window is often used:
$$P(s, s') = \sum_{t=0}^{W} P(s_t = s', s_0 = s), \qquad (20)$$
where $W$ is the window size. Here, we implicitly assume that the same state (word) is not repeated within the temporal window to guarantee that $P(s, s')$ is a probability.
However, we may arbitrarily calculate the coincidence for $P(s, s')$. Here we evaluate coincidence with an infinite asymmetric exponential kernel as in SR:
$$P(s, s') = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s', s_0 = s). \qquad (21)$$
We introduced the normalization factor $(1 - \gamma)$ to guarantee that $P(s, s')$ is less than one ($(1 - \gamma) \sum_{t=0}^{\infty} \gamma^{t} = 1$). Then, PMI becomes
$$PMI = \log\left(\frac{(1 - \gamma) \sum_{t=0}^{\infty} \gamma^{t} P(s_t = s', s_0 = s)}{P(s)\, P(s')}\right) \qquad (22)$$
$$= \log(SR(s, s')) - \log(P(s')) + \log(1 - \gamma) \qquad (23)$$
$$= SI(s, s') + \log(1 - \gamma). \qquad (24)$$
If we perform dimension reduction, $\log(1 - \gamma)$ can be ignored because it is a constant. Therefore, we can interpret SI as a special case of PMI in our model.
A.5 RELATIONSHIP BETWEEN MODEL COMPONENTS AND REPRESENTATIONS
To clarify the contribution of each model component to the results in this study, we performed a “lesion study” in which we removed some components of DSI and repeated the same evaluation procedures as in the main text. We summarize the results in Table 3. First, we tested representations obtained by singular value decomposition of the successor representation (SR-SVD), which was regarded as a model of grid cells in a previous study (Stachenfeld et al., 2017). The DSI model exceeded SR-SVD in all aspects shown in this study. Next, we tested the DSI model without decorrelation ($\beta_{cor} = 0$) and the DSI model without non-negativity (no rectification of representation vectors). Neither modification impaired the performance of navigation and inference, showing the importance of using SI for vector-based computations, as theoretically expected. In contrast, removing decorrelation and non-negativity significantly impaired the emergence of grid-like units and concept-specific units, respectively. Thus, decorrelative NMF is crucial to obtain biologically plausible representations.
A.6 DETAILS OF EVALUATION OF GRID REPRESENTATIONS
In Section 4.2, we performed the gridness analysis following a previous experimental study (Sargolini et al., 2006). For each unit, we rotated the spatial autocorrelation map (Figure 2B, lower) and calculated correlations between the original and rotated maps. Gridness was defined as the difference between the lowest correlation at 60° and 120° and the highest correlation at 30°, 90°, and 150°. A unit was classified as a grid cell when its gridness exceeded zero.
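A minimal sketch of this gridness computation is given below, assuming the spatial autocorrelation map is available as a 2-D NumPy array; the experimental protocol typically restricts the correlation to an annulus around the central peak, which is omitted here for brevity:

```python
import numpy as np
from scipy.ndimage import rotate

def gridness(autocorr):
    """Gridness score (Sargolini et al., 2006): lowest correlation at
    60/120 degrees minus highest correlation at 30/90/150 degrees."""
    def corr_at(angle):
        rot = rotate(autocorr, angle, reshape=False, order=1)
        return np.corrcoef(autocorr.ravel(), rot.ravel())[0, 1]
    on_peak = min(corr_at(60), corr_at(120))
    off_peak = max(corr_at(30), corr_at(90), corr_at(150))
    return on_peak - off_peak  # a unit with gridness > 0 counts as a grid cell
```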
In Figure 2C, we constructed a distribution of grid scales. Grid scales were determined as the median of the distances between the central peak and the six closest peaks (vertices of the inner hexagon) in the spatial autocorrelation map. The kernel function for kernel density estimation was Gaussian with a standard deviation of 1.
A.7 PATH INTEGRATION BY DSI
We performed path integration based on DSI representations using movement-conditional recurrent weights. This strategy has been used in previous studies such as grid cell modeling (Gao et al., 2019) and action-conditional video prediction (Oh et al., 2015). This mechanism is also consistent with a conventional biological model for path integration in which head direction signals activate one of the attractor networks specialized for different directional shifts of grid patterns (McNaughton et al., 2006; Burak & Fiete, 2009).
We made an estimate of the next representation vector $\hat{x}_{t+1}$ by linear transformation of the current representation vector $x(s_t)$ as
$$\hat{x}_{t+1} = M(a_t)\, x(s_t), \qquad (25)$$
where $a_t$ represents a movement (one of eight directional movements in this study) and $M(a_t)$ is a movement-conditional recurrent weight matrix. Here, $x(s_t)$ was a DSI representation vector, and we optimized the matrix $M(a_t)$ by minimizing the prediction error $\|x(s_{t+1}) - M(a_t)\, x(s_t)\|_2^2$ by stochastic gradient descent during random walks on the state transition graph (20 simulation trials of 100,000 time steps). After optimization, we set an initial state $s_0$ and a sequence of movements $\{a_0, a_1, \ldots, a_{T-1}\}$, and performed path integration by the recursive estimation $\hat{x}_{t+1} = M(a_t)\, \hat{x}_t$. We determined a position at each time step by searching for the state representation vector that has the minimum Euclidean distance to the estimated vector ($s_t = \arg\min_s \|x(s) - \hat{x}_t\|_2$). As shown in Figure 6, this strategy gave accurate estimation of the spatial path from movement signals.
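A minimal sketch of both steps is given below, assuming a matrix `X` of DSI vectors and a list of observed `(s_t, a_t, s_{t+1})` transitions; the simple SGD loop and all names are illustrative rather than the authors' implementation:

```python
import numpy as np

def fit_transition_matrices(X, transitions, n_actions,
                            lr=0.01, epochs=20, seed=0):
    """Learn movement-conditional matrices M(a) by SGD on the
    prediction error ||x(s_{t+1}) - M(a_t) x(s_t)||^2.
    `transitions` is a list of (s_t, a_t, s_{t+1}) index triples."""
    dim = X.shape[1]
    rng = np.random.default_rng(seed)
    M = rng.normal(scale=0.01, size=(n_actions, dim, dim))
    for _ in range(epochs):
        for s, a, s_next in transitions:
            err = X[s_next] - M[a] @ X[s]      # prediction error
            M[a] += lr * np.outer(err, X[s])   # gradient step on M(a)
    return M

def path_integrate(X, M, s0, actions):
    """Roll the estimate forward from movements alone and decode the
    state whose representation is closest to the running estimate."""
    x_hat, states = X[s0].copy(), [s0]
    for a in actions:
        x_hat = M[a] @ x_hat
        states.append(int(np.argmin(np.linalg.norm(X - x_hat, axis=1))))
    return states
```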
A.8 DETAILS OF VECTOR-BASED INFERENCE OF THE SPATIAL CONTEXT
In Section 4.4, we performed vector-based inference for spatial navigation in a novel context. Specifically, we define separated states in contexts A, B, and A+B as $s^A_i$, $s^B_i$, and $s^{A+B}_i$, where $i$ is a positional index which indicates the same position in all contexts ($i = 1, 2, \cdots, 900$). We constructed representation vectors $x(s^A_i)$, $w(s^A_i)$, $x(s^B_i)$, and $w(s^B_i)$ through direct experiences, then we created $x(s^{A+B}_i)$ and $w(s^{A+B}_i)$ as
$$x(s^{A+B}_i) = x(s^A_i) + x(s^B_i), \qquad (26)$$
$$w(s^{A+B}_i) = w(s^A_i) + w(s^B_i). \qquad (27)$$
We performed spatial navigation in a given context using one of the three representation sets $\{x(s^A_i), w(s^A_i)\}$, $\{x(s^B_i), w(s^B_i)\}$, and $\{x(s^{A+B}_i), w(s^{A+B}_i)\}$ for the corresponding positions, following the rule described in Section 4.3. Figure 7 shows the structures of the state transition graphs for the three contexts A, B, and A+B.
To learn representations in contexts A and B, we sampled sequences of $\{s^A_i\}_{i=1,\cdots,900}$ and $\{s^B_i\}_{i=1,\cdots,900}$ by random walk in contexts A and B. The procedure was basically the same as in Section 4, except that we increased the number of simulation trials from 500 to 1,000, and state transitions to the same position in the other context occurred every 5,000 time steps (transitions between $s^A_i$ and $s^B_i$). We added this transition to associate the same position in different contexts. This means that we assumed the setting of barriers can change during the experience, but this temporal association may be substituted by similarity of sensory inputs across contexts. From the sampled sequences, we calculated PSI for all combinations of $\{s^A_i\}_{i=1,\cdots,900}$ and $\{s^B_i\}_{i=1,\cdots,900}$, and calculated 100-dimensional DSI vectors for the 1,800 states by simultaneous compression of all states. The discount factor $\gamma$ was set to 0.999.
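Under the alignment assumed here (row $i$ of each matrix addresses the same position in every context), the composition of Eqs. 26-27 reduces to element-wise vector addition, as the illustrative sketch below shows; the composed matrices can be plugged directly into the navigation rule of Section 4.3:

```python
import numpy as np

def compose_context(X_A, W_A, X_B, W_B):
    """Eqs. 26-27: representations for the unvisited context A+B are the
    element-wise sums of the vectors learned for contexts A and B.
    Rows are aligned so that index i means the same position everywhere."""
    return X_A + X_B, W_A + W_B

# The composed (X_AB, W_AB) can be passed to the greedy navigation
# sketch from Section 4.3 without any further training.
```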
A.9 VISUALIZATION OF SPATIAL STRUCTURES REPRESENTED BY DSI VECTORS
In Figure 8, we visualized the metric spaces defined by representation vectors for contexts A and B, and by composite vectors for the context A+B, using multidimensional scaling (MDS). This visualization clearly shows that DSI vectors for A and B capture the structures of spatial contexts A and B, and that adding those vectors yields an appropriate metric space for the novel context A+B.
A.10 DETAILS OF PREPROCESSING OF TEXT DATA
In Section 5, we used text data taken from an English Wikipedia dump (enwiki-latest-pages-articles, 22-May-2020). We first generated text files from the raw data using wikiextractor (https://github.com/attardi/wikiextractor). We tokenized texts by the nltk Punkt sentence tokenizer, and randomly sampled 100,000 articles containing at least 1,000 tokens. We lowercased all characters and removed punctuation characters in the data. After that, we selected words that appeared more than 1,000 times in the data, and substituted all other rare words by the <unk> symbol. Finally, we obtained data that contains 124M tokens and 9376 words.
A.11 DETAILS OF EVALUATION OF CONCEPTUAL SPECIFICITY
In Section 5.2, the conceptual specificity of each unit was evaluated using the WordNet database (Princeton University, 2010). In WordNet, a word belongs to several synsets (sets of cognitive synonyms), and the semantic similarity of two synsets can be evaluated from the shortest path length between them in the WordNet structure (we used the path similarity function in the nltk library). We defined the similarity of two words as the highest similarity among all combinations of synsets of those words. We calculated the mean similarity of all combinations of TOP-10 words (the ten words that most highly activated the unit; Figure 5A) that are available in WordNet. We evaluated only units which had at least five TOP-10 words available in WordNet. Furthermore, we randomly generated 1,000 pairs of words available in WordNet, and generated a null distribution of similarity between words. We defined the significance threshold of similarity as the 95th percentile of the null distribution, and a unit was classified as a significantly concept-specific unit if the mean similarity of its TOP-10 words exceeded the threshold. Furthermore, we quantitatively defined the conceptual specificity of each unit as
$$\frac{s_{unit}}{s_{null}} - 1, \qquad (28)$$
where $s_{unit}$ is the mean similarity of TOP-10 words and $s_{null}$ is the mean of the null distribution. This quantity becomes zero if the similarity between TOP-10 words does not differ from that of random pairs, and becomes positive if TOP-10 words are semantically similar. This conceptual specificity was averaged over all evaluated units.
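A simplified sketch of this evaluation using the nltk WordNet interface is given below (it requires `nltk.download('wordnet')`); the additional criteria described above, such as requiring at least five TOP-10 words in WordNet and the 95th-percentile significance threshold, are omitted for brevity:

```python
import random
from itertools import combinations
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def word_similarity(w1, w2):
    """Highest path similarity over all synset pairs of the two words."""
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(sims) if sims else None

def conceptual_specificity(top10, vocab, n_null=1000, seed=0):
    """Eq. 28: mean TOP-10 similarity relative to random word pairs."""
    pair_sims = [word_similarity(a, b) for a, b in combinations(top10, 2)]
    pair_sims = [s for s in pair_sims if s is not None]
    rng = random.Random(seed)
    null = [word_similarity(*rng.sample(vocab, 2)) for _ in range(n_null)]
    null = [s for s in null if s is not None]
    s_unit = sum(pair_sims) / len(pair_sims)
    s_null = sum(null) / len(null)
    return s_unit / s_null - 1.0
```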
A.12 EXAMPLE DSI REPRESENTATIONS FOR WORDS
Figure 9 shows the TOP-10 words of DSI units without manual selection. We found that several non-significant units exhibited conceptual specificity according to manual inspection (for example, unit 4 may be named as a university cell). This is probably because of the limitation of the knowledge covered by WordNet. Therefore, we suppose that the current evaluation method tends to underestimate the number of concept-specific units. However, the comparison across models was fair because we used the same procedure and criteria for all models.
A.13 EXAMPLES OF THE ANALOGICAL INFERENCE TASK
In Table 4, we show some examples of the analogical inference in Mikolov's dataset. Each question assumes a relationship "WORD1 is to WORD2 as WORD3 is to WORD4"; the expected relationship in the vector space is then WORD2 − WORD1 = WORD4 − WORD3. In this study, we performed inference of WORD4 by WORD3 + WORD2 − WORD1. We regarded an inference as correct if the actual vector of WORD4 had the largest cosine similarity to the inferred vector among all word representation vectors (except those for WORD1, WORD2, and WORD3). If the number of words is 10,000, the chance level of the correct answer rate is 0.01%. Therefore, the performance shown in this study (more than 50%) is far above the chance level.
A.14 CLUSTERING OF SEMANTIC CATEGORIES IN DSI SPACE
Figure 10 shows the structure of DSI word representations visualized by MDS. We arbitrarily chose words based on the 10 semantic categories used in Reber et al. (2019). We used the same dissimilarity metric as Reber et al. (2019) (1 − Pearson's correlation coefficient).
A.15 INTUITIVE MECHANISM OF WORD REPRESENTATIONS BY DSI
In this section, we discuss how DSI vectors represent and compute words.
First, we analyzed the ratio of each element to the sum of all elements in DSI vectors. We found that even the largest element accounted for only 5% of the sum of all elements on average (Figure 11). This result shows that DSI vectors for words are non-sparse and distributed; thus, each word is represented by the combination of multiple conceptual units.
Next, for further clarification, we inspected the representations of an example set of words: France, Paris, Germany, and Berlin. We can see there are two analogical relationships (country-capital and French-German relationships). We identified the most active units (TOP-2) in the DSI vectors for those words, and listed the TOP-10 words for the identified units. As a result, we could see that “France” is represented by the combination of units that we could name the French cell and the country cell, whereas “Berlin” is represented by the combination of the German cell and the capital cell, and so on (Figure 12). This example also gives a simple interpretation of word similarity in the DSI vector space. If words are similar, they share a large number of active units, like the country cell shared by the representations of France and Germany. Thus, semantic similarity between words increases cosine similarity between word vectors.
Furthermore, we also identified the largest elements (the largest absolute values) in the difference vectors between words, and found that they correspond to the semantic differences between the words (Figure 12). Thus, we can regard analogical inference by DSI vectors as recombination of conceptual units. For example, adding the Germany − Berlin vector to the Paris vector deactivates the capital cell and activates the country cell, which leads to the transformation of Paris into France.
This property of the vector space is the same as in conventional word embedding methods, but a unique feature of our model is that those analogical relationships are factorized into separated units. We speculate that the constraints of decorrelative NMF are sufficient conditions to align each semantic factor to an axis of the word vector space, and the mechanism is probably related to how disentangled representations emerge in visual feature learning models (Higgins et al., 2017; Carbonneau et al., 2020).

1. What is the focus and contribution of the paper regarding positive successor information in reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its relationship with linear reinforcement learning and its application to spatial navigation and word embedding?
3. Do you have any concerns or questions regarding the use of non-negative matrix factorization and its novelty compared to prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper defines positive successor information in the framework of reinforcement learning, and proposes decorrelative non-negative matrix factorization for dimension reduction of positive successor information. The paper applies the proposed method to spatial navigation and word embedding, and obtained meaningful results. The paper also analyzes the theoretical relationship to linear reinforcement learning.
Strengths And Weaknesses
Strengths:
(1) The theoretical foundation of the paper is solid, especially its relationship with linear reinforcement learning.
(2) The empirical results on grid cells are strong.
(3) The experiments on word embedding are solid.
Weaknesses:
(1) Non-negative matrix factorization is widely used, such as in recommender systems, to factorize the value function or the affinity matrix. The successor representation has also been used to model place cells, which are connected to grid cells via matrix factorization or eigendecomposition. Thus this paper is not entirely novel, although the specific form of decorrelative NMF is new.
(2) Given the large number of grid cells in EC, dimension reduction may not be the most convincing point of view for embedding. It is more about population coding, where generalization does not need to rely on dimension reduction. Other forms of regularizations may also work.
(3) Can your method on grid cells explain path integration?
(4) The sequence of words in NLP is not a trajectory. Contextualized embeddings such as BERT seem more reasonable.
Clarity, Quality, Novelty And Reproducibility
(1) The paper is very clearly written.
(2) The theoretical foundation of the paper is solid.
(3) The proposed method is new, but is not entirely original, given the past work on successor representation for place cells and past work on word embedding. Matrix factorization is an element in both areas. Word embedding in NLP has gone far beyond matrix factorization. Dimension reduction may not explain grid cells either, given the large number of grid cells in EC.
ICLR | Title
Unified neural representation model for physical and conceptual spaces
Abstract
The spatial processing system of the brain uses grid-like neural representations (grid cells) for supporting vector-based navigation. Experiments also suggest that neural representations for concepts (concept cells) exist in the human brain, and conceptual inference relies on navigation in conceptual spaces. We propose a unified model called “disentangled successor information (DSI)” that explains neural representations for physical space and linguistic concepts. DSI generates grid-like representations in a 2-dimensional space that highly resemble those observed in the brain. Moreover, the same model creates concept-specific representations from linguistic inputs, corresponding to concept cells. Mathematically, DSI vectors approximate value functions for navigation and word vectors obtained by word embedding methods, thus enabling both spatial navigation and conceptual inference based on vector-based calculation. Our results suggest that representations for space and concepts can emerge from a shared mechanism in the human brain.
1 INTRODUCTION
In the brain, grid cells in the entorhinal cortex (EC) represent the space by grid-like representations (Hafting et al., 2005; Doeller et al., 2010; Jacobs et al., 2013). This neural representation is often related to vector-based spatial navigation because grid cells provide global metric over the space. Theoretically, an animal can estimate the direction to a goal when representations of a current position and a goal position are given (Fiete et al., 2008; Bush et al., 2015). Furthermore, self-position can be estimated by integrating self-motions when sensory information is not available (McNaughton et al., 2006). These functions are the basis of robust spatial navigation by animals.
There are not only spatial but also conceptual representations in EC. Neurons called as “concept cells” have been found in human medial temporal lobe including EC (Quiroga, 2012; Reber et al., 2019). Concept cells respond to specific concepts, namely, stimuli related to a specific person, a famous place, or a specific category like “foods” and “clothes”. Furthermore, recent experiments also suggest that grid-like representations appear not only for physical space but also for conceptual space if there is a 2-dimensional structure (e.g. lengths of a neck and legs, intensity of two odors), and those representations are the basis of vector-based conceptual inference (Bao et al., 2019; Constantinescu et al., 2016; Park et al., 2021). Thus, it is expected that there is a shared processing mechanism for physical and conceptual spaces in EC. Existence of shared neural mechanism may also explain why humans use sense of physical space (such as directionality) to communicate abstract concepts (conceptual metaphor (Lakoff & Johnson, 1980)). However, a principle behind such universal computation in the brain is still unclear.
In this paper, we propose a representation model which we call disentangled successor information (DSI) model. DSI is an extension of successor representation (SR), which stems from a theory of reinforcement learning and became one of promising computational models of the hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Like eigenvectors of SR, DSI forms grid-like codes in a 2-D space, and those representations support vector-based spatial navigation because DSI approximates value functions for navigation in the framework of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021). Remarkably, when we apply DSI to text data by regarding a sequence of words as a sequence of states, DSI forms concept-specific representations like concept cells. Furthermore, we show mathematical correspondence between DSI and word embedding models in natural language processing (NLP)
(Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014), thus we can perform intuitive vector-based conceptual inference as in those models. Our model reveals a new theoretical relationship between spatial and linguistic representation learning, and suggests a hypothesis that there is a shared computational principle behind grid-like and concept-specific representations in the hippocampal system.
2 CONTRIBUTIONS AND RELATED WORKS
We summarize contributions of this work as follows. (1) We extended SR to successor information (SI), by which we theoretically connected reinforcement learning and word embedding, thus spatial navigation and conceptual inference. (2) We found that dimension reduction with constraints for grid-like representations (decorrelative NMF) generates disentangled word vectors with conceptspecific units, which has not been found previously. (3) Combining these results, we demonstrated that a computational model for grid cells can be extended to represent and compute linguistic concepts in an intuitive and biologically plausible manner, which has not been shown in previous studies.
Our model is an extension of successor representation (SR), which is recently viewed as a plausible model of hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Furthermore, default representation (DR), which is based on linear reinforcement learning theory, has been also proposed as a model of EC (Piray & Daw, 2021). We show that our model can extract linguistic concepts, which has not been shown for SR and DR. Furthermore, we demonstrate vector-based compositionality of words in our model, which expands the range of compositionality of EC representations (Piray & Daw, 2021) to semantic processing.
Our model produces biologically plausible grid-like representations in 2-D space, which supports spatial navigation. Previous studies have revealed that non-negative and orthogonal constraints are important to obtain realistic grid-like representations (Dordek et al., 2016; Sorscher et al., 2019). Furthermore, recurrent neural networks form grid-like representations through learning path integration, and those representations support efficient spatial navigation (Banino et al., 2018; Cueva & Wei, 2018; Gao et al., 2019). Some of those models have reproduced experimentally observed scaling ratios between grid cell modules (Banino et al., 2018; Sorscher et al., 2019). However, previous models have not been applied to learning of linguistic concepts, or other complex conceptual spaces in real-world data. Whittington et al. (2020) proposed a unified model for spatial and nonspatial cognition. However, their model was applied only to simple graph structures and conceptual specificity like our model was not observed.
Analogical inference by our model is a same function as word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). However, a unique feature of DSI representations is that each dimension of vectors corresponds to a specific concept like concept cells in the human brain (Quiroga, 2012; Reber et al., 2019). Our model provides biological plausible interpretation of word embedding: each word is represented by combination of disentangled conceptual units, inference is recombination of those concepts, and such representations emerge through the same constraints with grid cells. It was recently shown that transformer-based models (Vaswani et al., 2017; Brown et al., 2020), which are currently state-of-the-art models in NLP, generate grid-like representations when applied to spatial learning (Whittington et al., 2022). Similarly to our model, this finding implies the relationship between spatial and linguistic processing in the brain. However, concept-specific representations has not been found in such model. Furthermore, clear theoretical interpretation in this study depends on the analytical solution for skip-gram (Levy & Goldberg, 2014). Such analytical solution is currently unknown for transformer-based models.
3 MODEL
3.1 DISENTANGLED SUCCESSOR INFORMATION
Let us assume Ns discrete states exist in the environment. Successor representation (SR) between two states s and s′ is defined as
SR(s, s′) = E [ ∞∑ t=0 γtδ(st, s ′)|s0 = s ] = ∞∑ t=0 γtP (st = s ′|s0 = s), (1)
where δ(i, j) is Kronecker’s delta and γ is a discount factor. We describe how we calculate SR in this study in Appendix A.1. SR and its dimension-reduced representations have been viewed as models of hippocampus and entorhinal cortex, respectively (Stachenfeld et al., 2017).
Based on SR, we define successor information (SI) and positive successor information (PSI) as
SI(s, s′) = log(SR(s, s′))− log(P (s′)), (2) PSI(s, s′) = max{SI(s, s′), 0}. (3)
In this study, we regard this quantity as a hippocampal model instead of SR (Figure 1A).
Next, we introduce a novel dimension reduction method which we call decorrelative non-negative matrix factorization (decorrelative NMF). Decorrelative NMF can be regarded as a variant of NMF (Lee & Seung, 1999) with additional constraints of decorrelation. By applying decorrelatve NMF to PSI, we obtain representation vectors called as disentangled successor information (DSI), which we regard as a model of EC (Figure 1A). In decorrelative NMF, we obtain D-dimensional vectors x(s) and w(s) (D < Ns) by minimization of the following objective function
J = 1
2 ∑ s,s′ ρ(s, s′)(PSI(s, s′)− x(s) ·w(s′))2
+ 1
2 βcor ∑ i ̸=j (Corr(i, j))2 + 1 2 βreg ∑ s (||x(s)||2 + ||w(s)||2), (4)
subject to non-negative constraints ∀i, xi(s) ≥ 0, wi(s) ≥ 0. ρ(s, s′) is a weight for the square error
ρ(s, s′) = 1
NsV
( 1
M PSI(s, s′) + ρmin
) , (5)
where M and V are mean and variance of PSI, respectively, and ρmin is a small value to avoid zero-weight. Corr(i, j) is a correlation between two dimensions in x(s)
Corr(i, j) = ∑ s x̃i(s)x̃j(s)√∑
s(x̃i(s)) 2 ∑ s(x̃j(s)) 2 , (6)
where x̃i(s) = xi(s) − 1Ns ∑
s xi(s). The first term of the objective function is weighted approximation error minimization, the second term works for decorrelation between dimensions, and the third term regularizes representation vectors. Optimization was performed by Nesterov’s accelerated gradient descent method (Nesterov, 1983) with rectification of xi(s), wi(s) every iteration. We describe additional details in Appendix A.2.
3.2 RELATIONSHIPS WITH REINFORCEMENT LEARNING AND WORD EMBEDDING
We show dual interpretation of our model. On the one hand, DSI approximates value estimation of linear reinforcement learning, thus support goal-directed decision making and navigation. On the
other hand, the same representation approximates word embedding in NLP, thus support semantic computation.
First, our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation. Linear reinforcement learning assumes default policy and imposes additional penalty on deviation from default policy, then we can obtain value functions explicitly by solving linear equations. Let us consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state sG arbitrarily chosen from non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability pNT . Furthermore, we assume that reward at non-terminal states are uniformly negative and reward at the terminal state is positive so that the agent has to take a short path to goal to maximize reward. In this setting, we can obtain value functions v∗(s) in linear reinforcement learning as
λ−1v∗(s) = log(SRd(s, sG))− logP d(sG) = SId(s, sG) ≈ x(s) ·w(sG), (7)
where SRd(s, sG) and SId(s, sG) are SR and SI under the default policy, respectively. We describe details of derivation in Appendix A.3. Therefore, SI is proportional to value functions for spatial navigation and inner products of DSI vectors approximates value functions. Based on this interpretation, we basically regard x(s) as a representation of each state, and w(s) represents a temporary goal.
Second, DSI is related to word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In linguistics, pointwise mutual information (PMI) and positive pointwise mutual information (PPMI) are used to measure the degree of coincidence between two words (Levy & Goldberg, 2014). They are defined as
PMI = log
( P (wordi, wordj)
P (wordi)P (wordj)
) , (8)
PPMI = max{PMI, 0}, (9)
where P (wordi, wordj) is a coincidence probability of two words (in a certain temporal window). It has been proven that dimension reduction of PMI approximates a word embedding method skipgram (Mikolov et al., 2013a;b), and similar performance is obtained using PPMI (Levy & Goldberg, 2014). GloVe (Pennington et al., 2014) is also based on this perspective. SI can be written as
SI(s, s′) = log(SR(s, s′))− log(P (s′)) = log (∑∞ t=0 γ tP (st = s ′, s0 = s)
P (s)P (s′)
) . (10)
In this formulation, we can see mathematical similarity between PMI and SI by regarding words as states (s = wordi, s′ = wordj), thus the correspondence between PPMI and PSI. Because of this relationship, we can expect that DSI, which is obtained through dimension reduction of PSI, has similar properties to word embedding methods. The difference is how to count coincidence: the coincidence in SI is evaluated with an asymmetric exponential kernel as in SR, in contrast that a symmetric rectangular temporal window is often used in typical word embedding (see Appendix A.4 for further detail).
3.3 DECORRELATIVE NMF RELATES TO GRID CELLS AND DISENTANGLEMENT
Constraints in decorrelative NMF (non-negativity, decorrelation (or orthogonality), and regularization) are important for generation of grid cells, as shown in previous theoretical studies on grid cells. (Dordek et al., 2016; Cueva & Wei, 2018; Banino et al., 2018; Gao et al., 2019; Sorscher et al., 2019). They are also biologically plausible because neural activity is basically non-negative and decorrelation is possible through lateral inhibition. On the other hand, non-negativity (Oja & Plumbley, 2004) and decorrelation (Hyvärinen & Oja, 2000) are also important for extraction of independent components, and it is known that imposing independence in latent space of deep generative models results in the emergence of disentangled representations for visual features (Higgins et al., 2017). Therefore, in word embedding, we expected that those constraints help emergence of independent and disentangled units for linguistic concepts. Constraints in decorrelative NMF are actually crucial for results obtained in this study (Appendix A.5).
As disentangled visual representations explain single-cell activities in higher-order visual cortex (Higgins et al., 2021), we may similarly interpret conceptual representations in our model as concept cells in the human medial temporal lobe (Quiroga, 2012). Previous studies suggest that each concept cell respond to a specific concept, whereas population-level activity patterns represent abstract semantic structures (Reber et al., 2019). Such property is consistent with the factorized and distributed nature of disentangled representation vectors.
4 LEARNING REPRESENTATIONS OF PHYSICAL SPACES
In this section, we empirically show that DSI model forms biologically plausible grid-like representations in a 2-D physical space, and they support spatial navigation. These results also apply to conceptual spaces with the 2-D structure, depending on the definition of states.
4.1 LEARNING PROCEDURE
As an environment, we assumed a square room tiled with 30×30 discrete states (Figure 2A). In each simulation trial, an agent starts at one of those 900 states and transits to one of eight surrounding states each time except that transitions are limited at states along the boundary (the structure was not a torus). Transitions to all directions occur with an equal probability. We performed 500 simulation trials and obtained a sequence of 100,000 time steps in each trial. We calculated occurrence probabilities (P (s)) and a successor representation matrix (SR(s, s′)) of 900 states from those sequences, and calculated PSI and DSI (100-dimensional) as described in the Model section. The discount factor γ was set to 0.99. We additionally tested spatial navigation in a structure with separated and interconnected rooms (see Figure 3C). In that case, we used the discount factor γ = 0.999.
4.2 EMERGENCE OF GRID-LIKE REPRESENTATIONS
Here we call each dimension of DSI representation vectors x(s) as a neural ”unit”, and we regard a value in each dimension at each state as a neural activity (or a neural representation). As shown in Figure 2B, many units exhibited grid-like activity patterns in the space. We performed a gridness analysis that has been used in animal experiments (Sargolini et al., 2006) and found that 51% of units were classified as grid cells. Similarly, 53% of units in w(s) were classified as grid cells.
Furthermore, we checked whether DSI representations in the physical space reproduce a property of biological grid cells. Actual grid cells in the rat brain exhibit multiple discrete spatial scales and the ratio between grid scales of adjacent modules is √ 2 (Stensola et al., 2012). We constructed a distribution of grid scales of DSI units by kernel density estimation, which revealed that multiple peaks of grid scales existed and the ratio between grid scales of adjacent peaks was √ 2 (Figure 2C). These results show that DSI model constructs biologically plausible grid-like representations in the 2-D physical space. We describe details of analysis methods in Appendix A.6.
4.3 NEAR-OPTIMAL SPATIAL NAVIGATION BY DSI VECTORS
As discussed in Section 3.2, the inner product of DSI representations approximate value functions for spatial navigation. Therefore, we tested whether DSI representations actually enable nearoptimal navigation in the space.
We assume that a start location (state sinit) and a goal location (state sG) are randomly given in each trial such that the shortest path length is minimally 10, and an agent has to navigate between them. To solve the task, we define a vector-based state transition rule. Suppose that the agent exists at a state s, and a set of neighboring states of s is A(s). Given the goal representation vector w(sG), a value function of a neighboring state snext ∈ A(s) is estimated by x(snext) · w(sG), and the agents transits to the state that has a maximum value. This state transition rule can be geometrically interpreted as the choice of movements that has the closest angle with the goal vector in the representation space (Figure 3A). Otherwise, we can interpret that the agent estimates value functions by linear readout from grid-like DSI representations. Because of the approximation error, this rule did not always give optimal navigation (the shortest path from the start to the goal). However, the agent could take the shortest path to the goal in 93.9% of 1,000 trials we tested (an example is shown in
(
√
2)n
, respectively.
Figure 3B). Furthermore, 97.2% were near-optimal navigation in which the actual path length was shorter than 1.1 times the shortest path length. The same framework also worked in a relatively complex environment with separated rooms (Figure 3C). In this environment, the ratio of optimal and near-optimal navigation was 68% and 82.6%, respectively. We also confirmed that we can perform path integration based on DSI representations using movement-conditional recurrent weights (McNaughton et al., 2006; Burak & Fiete, 2009; Oh et al., 2015; Gao et al., 2019) (Appendix A.7). These results show that DSI representations can support spatial navigation, which corresponds to the contribution of biological grid cells for spatial navigation.
4.4 VECTOR-BASED INFERENCE OF SPATIAL CONTEXTS
We additionally found that we can perform vector-based inference for spatial navigation in a novel context. First, we constructed DSI representation vectors in spatial contexts A and B, each of which has a barrier (Figure 4A). Then, we created representation vectors for a novel context A+B with two barriers by simply adding representation vectors for familiar contexts A and B (Figure 4A). We tested vector-based navigation (described in the section 4.3) in three spatial contexts A, B, and A+B, using one of three representations for A, B, and A+B. Naturally, representation vectors for A and B gave the best performance in contexts A and B, respectively (Figure 4B). Notably, composite representation vectors for A+B achieved the best performance in the context A+B (Figure 4B). This
result suggests that we can utilize vector-based composition of representations for a novel spatial context. We describe details of the simulation in Appendix A.8.
Additional analysis by multidimensional scaling (MDS) suggest that summing DSI vectors leads to composition of an appropriate metric space for the novel context (Appendix A.9). This is potentially useful for composing multiple constraints that change reachability between states in various tasks (such as control of robotic arms and playing computer games), like composition of tasks in soft-Q learning (Haarnoja et al., 2018; Makino).
5 LEARNING REPRESENTATIONS OF CONCEPTUAL SPACES
In this section, we show that the same DSI model can learn representations for a complex conceptual space from linguistic inputs, and those representations support vector-based conceptual inference.
5.1 LEARNING PROCEDURE
We used text data taken from English Wikipedia, which contains 124M tokens and 9376 words (see Appendix A.10 for the detail of preprocessing). To construct DSI representations, we regarded each word as a “state”, and considered the text data as a sequence of 9376 states (Ns = 9376). Then, we applied the exactly same learning procedure as in the experiment of physical spaces. We obtained 300-dimensional DSI representation vectors for each word. The discount factor γ was set to 0.9. The setting of other parameters was the same as the experiment of physical spaces.
5.2 EMERGENCE OF CONCEPT-SPECIFIC REPRESENTATIONS
As in the previous section, we regard each dimension of representation vectors as a neural unit, and checked how various words activate those units. Specifically, we listed ten words that elicited the highest activities in each unit (TOP-10 words). Consequently, we found that many units are activated by words related to specific concepts (Figure 5; other examples in Appendix A.12), which could be named as “game cell” or “president cell”, for example. We quantified this conceptual specificity through WordNet-based semantic similarity between words (Princeton University, 2010). We compared mean similarity among TOP-10 words and a null distribution of similarity between random word pairs, by which we determined statistically significant concept-specific units and quantified the degree of conceptual specificity of each unit (see Appendix A.11 for details). DSI exhibited the larger number of significantly concept-specific units and higher average conceptual specificity than other well-established word embedding methods such as skip-gram and GloVe (Table 1) (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). We also analyzed conceptual specificity of representations in the embedding layer of pretrained BERT model (bert-base-uncased in Hugging Face transformers) (Devlin et al., 2018; Wolf et al., 2020), which was lower than DSI (Table1). This result shows that our DSI model forms more concept-specific representations than other models.
Additional analyses revealed that word representation vectors are non-sparse and distributed (Appendix A.15). Therefore, each word is represented by the combination of concept-specific units
METHOD EVALUATED SIGNIFICANT RATIO SPECIFICITY
shared by several related words. For example, ”France” can be represented by the combination of units which we could name French cell and country cell (Appendix A.15).
5.3 VECTOR-BASED COMPUTATION IN THE CONCEPTUAL SPACE
Given that DSI and word embedding methods are mathematically similar (Section A.4), we expect that DSI vectors have similar properties to representation vectors learned by those word embedding methods. We evaluated the performance of DSI vectors in two tasks that have been used to evaluate word embedding methods: word similarity and analogical inference (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In the word similarity task, we calculated cosine similarity between representation vectors of word pairs, and evaluated the rank correlation between those cosine similarities and human word similarities (WS353 dataset (Agirre et al., 2009); 248/345 word pairs were used). In the analogical inference task, we performed calculation of vectors such as x(king) − x(man) + x(woman) and checked whether the resultant vector has the maximum cosine similarity with x(queen) (Mikolov’s dataset (Mikolov et al., 2013a;b); 3157/19544 questions were used; examples in Appendix A.13). The result shows that DSI vectors achieved comparable performance with other well-established word embedding methods (Table 2).
This result indicates that similarity of DSI representation vectors corresponds to semantic similarity. This property is consistent with the experimental observation that population-level pattern similarity of concept cell activities represents semantic categories (Reber et al., 2019). By visualizing the structure of DSI representations by MDS, we can actually see clustering of words corresponding to 10 semantic categories used in Reber et al. (2019) (Appendix A.14). Furthermore, conceptual inference is possible through arithmetic composition of DSI vectors.We additionally found that this inference is intuitive recombination of concept-specific units in some cases. For example, transformation from ”Paris” to ”France” corresponds to activation of country cell and deactivation of capital cell, which is possible by summing the difference of Germany and Berlin vectors. (Appendix A.15).
METHOD SIMILARITY ANALOGY
6 DISCUSSION
In this paper, we proposed a theoretically interpretable and biologically plausible neural representation model for physical and conceptual spaces. We demonstrated that our DSI model forms grid-like representations in the physical space and concept-specific representations in the linguistic space, which are assumed to correspond to neural representations in EC. Furthermore, we showed that SI is mathematically related to linear reinforcement learning and word embedding methods, thus DSI representations support spatial navigation and conceptual inference. These results suggest that we can extend the spatial representation model of EC to learn and compute linguistic concepts, which apparently seems a different computational domain from physical space.
In Section 5.2, we demonstrated concept-specific representations created from text data. To the best of our knowledge, such a property has not been reported for any word embedding method. However, we unexpectedly found that continuous-bag-of-words (CBOW) showed relatively high conceptual specificity. Although DSI is related to PMI, skip-gram, and GloVe, we have not found a relationship to CBOW. Further clarification of the necessary conditions for conceptual specificity remains an open problem.
Although DSI has a clear mathematical interpretation, how biological neural networks can learn DSI is still unclear. A possible solution is an extension of the skip-gram neural network with SR, non-negativity, and decorrelation. Because SI corresponds to PMI, which is the optimum of the skip-gram neural network, we can expect that such an extended skip-gram network learns DSI. Building such a model and relating it to the circuit mechanism in the hippocampus and EC are left for future research.
Our model relates word embedding to conceptual representations in the brain. A previous study showed that skip-gram representations support high-performance decoding of semantic information from fMRI data (Nishida & Nishimoto, 2018). Another study revealed that hippocampal theta oscillation codes semantic distances between words measured in a word2vec subspace (Solomon et al., 2019). These experimental results support our hypothesis. However, recent studies have shown that representations in transformer-based models (Vaswani et al., 2017) such as GPT (Brown et al., 2020) achieve remarkable performance in linear fitting to neural recordings during linguistic processing (Goldstein et al., 2022; Schrimpf et al., 2021). A major difference between our DSI model and transformer-based models is that DSI representations are basically fixed (static embedding), whereas transformer-based models flexibly create context-dependent representations (dynamic embedding). Conceptual interpretation obviously depends on the context; thus activities of concept cells are context-dependent (Bausch et al., 2021). Therefore, our DSI model should be extended to process context dependence, hopefully by combination with other models for learning context-dependent latent cognitive states (Uria et al., 2020; George et al., 2021; Whittington et al., 2020).
Another direction of future research is application to general conceptual spaces by learning DSI representations from low-level sensory inputs, like spatial learning from visual and auditory inputs in previous models (Banino et al., 2018; Taniguchi et al., 2018; Uria et al., 2020). This may be possible by learning discrete states through unsupervised clustering for deep networks (Caron et al., 2018). As for the human brain, infants probably form primitive spatial and conceptual representations from sensory signals, and later linguistic inputs enrich those representations. We speculate that real-world sensory data also contain information about the conceptual space, for which DSI can be extended to learn those structures. Such a model would clarify the role of the hippocampal system in the computation of general conceptual spaces.
A APPENDIX
A.1 CALCULATION OF SR
SR is a variant of value functions in reinforcement learning, thus we can use various methods such as temporal-difference (TD) learning for its construction. Throughout this study, we used a direct count method because we performed only offline processing of finite data. In a sequence of states $\{s_1, \ldots, s_t, \ldots, s_T\}$, we recursively calculated exponential traces of past states $z(s, t) = \sum_{\tau=0}^{t-1} \gamma^{\tau} \delta(s_{t-\tau}, s)$ as
$$z(s, t) = \gamma z(s, t-1) + \delta(s_t, s), \quad (11)$$
and calculated SR from state counts and coincidence counts as
$$SR(s, s') = \frac{\sum_{t=1}^{T} z(s, t)\,\delta(s_t, s')}{\sum_{t=1}^{T} \delta(s_t, s)}. \quad (12)$$
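A minimal Python sketch of this direct-count procedure (Eqs. 11-12), assuming `seq` is a sequence of integer state indices:

```python
import numpy as np

def successor_representation(seq, n_states, gamma=0.99):
    """Direct-count SR from a state sequence, following Eqs. 11-12."""
    z = np.zeros(n_states)                    # exponential traces z(s, t)
    cooc = np.zeros((n_states, n_states))     # sum_t z(s, t) * delta(s_t, s')
    visits = np.zeros(n_states)               # sum_t delta(s_t, s)
    for s_t in seq:
        z *= gamma
        z[s_t] += 1.0                         # Eq. 11
        cooc[:, s_t] += z
        visits[s_t] += 1
    return cooc / np.maximum(visits, 1)[:, None]   # Eq. 12
```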
A.2 DETAILS OF DECORRELATIVE NMF
In decorrelative NMF, we iteratively updated vectors x(s) and w(s) by Nesterov’s accelerated gradient descent method to minimize the objective function (Eq. 4), rectifying all elements every iteration. Gradients are
$$\frac{\partial J}{\partial x_k(s)} = -\sum_{s'} \rho(s, s')\left(PSI(s, s') - x(s) \cdot w(s')\right) w_k(s') + \beta_{cor} \sum_{j \neq k} \frac{Corr(k, j)\,\tilde{x}_j(s)}{\sqrt{\sum_s (\tilde{x}_k(s))^2 \sum_s (\tilde{x}_j(s))^2}} + \beta_{reg}\, x_k(s), \quad (13)$$
$$\frac{\partial J}{\partial w_k(s')} = -\sum_s \rho(s, s')\left(PSI(s, s') - x(s) \cdot w(s')\right) x_k(s) + \beta_{reg}\, w_k(s'). \quad (14)$$
We note that we regarded the mean and variance of $x_i(s)$ in the correlation ($\frac{1}{N_s}\sum_s x_i(s)$ and $\sum_s (\tilde{x}_i(s))^2$ in Eq. 6) as constants in the calculation of these gradients. Practically, this heuristic did not affect the performance of decorrelation.
Throughout this paper, the learning rate was 0.05 and the number of iterations was 10,000. Parameters were $\beta_{cor} = 1$, $\beta_{reg} = 0.001$, and $\rho_{min} = 0.001$.
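The following sketch illustrates these updates with plain gradient descent (the Nesterov acceleration used in the paper is omitted for brevity); all names are our own, and `PSI` is assumed to be a precomputed square matrix:

```python
import numpy as np

def decorrelative_nmf(PSI, dim=100, lr=0.05, iters=10000,
                      beta_cor=1.0, beta_reg=1e-3, rho_min=1e-3, seed=0):
    """Sketch of decorrelative NMF (Eq. 4) with plain gradient descent."""
    rng = np.random.default_rng(seed)
    n = PSI.shape[0]
    M, V = PSI.mean(), PSI.var()
    rho = (PSI / M + rho_min) / (n * V)          # Eq. 5
    X = 0.1 * rng.random((n, dim))               # x(s) as rows
    W = 0.1 * rng.random((n, dim))               # w(s) as rows
    for _ in range(iters):
        err = rho * (PSI - X @ W.T)              # weighted residual, indexed (s, s')
        Xc = X - X.mean(axis=0)                  # centered columns (x-tilde)
        norms = np.sqrt((Xc ** 2).sum(axis=0))
        C = (Xc.T @ Xc) / np.outer(norms, norms) # Eq. 6
        np.fill_diagonal(C, 0.0)                 # exclude i == j terms
        grad_X = -err @ W + beta_cor * ((Xc / norms) @ C) / norms + beta_reg * X  # Eq. 13
        grad_W = -err.T @ X + beta_reg * W                                        # Eq. 14
        X = np.maximum(X - lr * grad_X, 0.0)     # rectification every iteration
        W = np.maximum(W - lr * grad_W, 0.0)
    return X, W
```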
A.3 MATHEMATICAL RELATIONSHIP OF DSI AND REINFORCEMENT LEARNING
In this section, we show that our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation.
In linear reinforcement learning, an agent aims to maximize "gain" instead of reward. Assuming a default policy $\pi^d(s)$ (any policy is available; typically a random walk in the case of an exploration task), the gain function is defined as
$$g(s) = r(s) - \lambda\, KL(\pi(s) \,|\, \pi^d(s)), \quad (15)$$
where $r(s)$ is the expected reward at state $s$ and $\lambda KL(\pi(s)|\pi^d(s))$ is the cost imposed on the difference between the current policy $\pi(s)$ and the default policy $\pi^d(s)$ ($\lambda$ is a relative weight of the cost). Then, previous works have shown that the optimal policy and the corresponding value functions can be determined explicitly by solving linear equations (Todorov, 2006; 2009; Piray & Daw, 2021). Here we consider an environment that consists of $N_N$ non-terminal states and $N_T$ terminal states. We define two transition probability matrices under the default policy: $P_{NT}$ is an $N_N \times N_T$ matrix for transitions from non-terminal states to terminal states, and $P_{NN}$ is an $N_N \times N_N$ matrix for transitions across non-terminal states. Furthermore, $r_N$ and $r_T$ are vectors of rewards at non-terminal states and terminal states, respectively. In this condition, a vector of value functions under the optimal policy $v^* = (v^*(s_1), \ldots, v^*(s_{N_N}))$ is obtained as
$$\exp(\lambda^{-1} v^*) = M P_{NT} \exp(\lambda^{-1} r_T), \quad (16)$$
where $M = (\mathrm{diag}(\exp(-\lambda^{-1} r_N)) - P_{NN})^{-1}$ is the DR (Piray & Daw, 2021). To relate $v^*$ to SI, we consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state $s_G$ arbitrarily chosen from the non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability $p_{NT}$. Furthermore, we assume that rewards at non-terminal states are uniformly negative and the reward at the terminal state is positive, so that the agent has to take a short path to the goal to maximize reward. Specifically, we assume all elements of $r_N$ are $\lambda \log \gamma$, and $r_T = -\lambda(\log \gamma + \log p_{NT} + \log P^d(s_G))$, where $\gamma$ is an arbitrary value in the range $(0, 1)$ and $P^d(s_G)$ is the probability of visiting the state $s_G$ under the default policy. Then, we obtain
$$\exp(\lambda^{-1} v^*) = \frac{1}{P^d(s_G)} (I - \gamma P_{NN})^{-1} e(i_G), \quad (17)$$
where $e(i_G) = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ ($i_G$ is the index of the goal state). Because $(I - \gamma P_{NN})^{-1}$ is equivalent to a successor representation matrix with a discount factor $\gamma$ (Dayan, 1993; Stachenfeld et al., 2017), we finally obtain
$$\lambda^{-1} v^*(s) = \log(SR^d(s, s_G)) - \log P^d(s_G) = SI^d(s, s_G) \approx x(s) \cdot w(s_G), \quad (18)$$
where $SR^d(s, s_G)$ and $SI^d(s, s_G)$ are SR and SI under the default policy, respectively. Thus, SI is proportional to value functions for spatial navigation, and inner products of DSI vectors approximate value functions. Based on this interpretation, we basically regard $x(s)$ as a representation of each state, and $w(s)$ represents a temporary goal.
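As a numeric sanity check of Eqs. 16-18, one can build a small random chain, compute $v^*$ through the linear-RL machinery of Eq. 16, and compare it with SI computed directly from the SR matrix. A sketch with our own parameter choices; here $P^d(s_G)$ is taken to be the stationary visiting probability of the default policy, which is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, lam, p_nt, goal = 6, 0.9, 1.0, 0.5, 2

P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # default-policy transitions
P_NN = P.copy(); P_NN[goal] *= (1 - p_nt)                   # leak to the terminal state at the goal
P_NT = np.zeros((n, 1)); P_NT[goal, 0] = p_nt

evals, evecs = np.linalg.eig(P.T)                           # stationary distribution as P^d(s)
pd = np.real(evecs[:, np.argmax(np.real(evals))]); pd /= pd.sum()

r_N = lam * np.log(gamma) * np.ones(n)
r_T = -lam * (np.log(gamma) + np.log(p_nt) + np.log(pd[goal]))
M = np.linalg.inv(np.diag(np.exp(-r_N / lam)) - P_NN)       # DR in Eq. 16
v = lam * np.log((M @ P_NT).ravel() * np.exp(r_T / lam))    # Eq. 16

SR = np.linalg.inv(np.eye(n) - gamma * P_NN)                # SR matrix with discount gamma
SI = np.log(SR[:, goal]) - np.log(pd[goal])                 # right-hand side of Eq. 18
print(np.allclose(v / lam, SI))                             # expected: True
```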
A.4 MATHEMATICAL RELATIONSHIP OF DSI AND WORD EMBEDDING
In this section, we discuss the relationship between SI and PMI (Levy & Goldberg, 2014) in detail. PMI is
$$PMI = \log\left(\frac{P(word_i, word_j)}{P(word_i)\, P(word_j)}\right), \quad (19)$$
where $P(word_i, word_j)$ is the coincidence probability of two words (in a certain temporal window).
To relate PMI to SI, we regard words as states: $s = word_i$, $s' = word_j$. Furthermore, we consider a specific way to count the coincidence probability. In typical word embedding, a finite symmetric rectangular window is often used:
$$P(s, s') = \sum_{t=0}^{W} P(s_t = s', s_0 = s), \quad (20)$$
where $W$ is a window size. Here, we implicitly assumed that the same state (word) is not repeated within the temporal window, to guarantee that $P(s, s')$ is a probability.
However, the coincidence $P(s, s')$ may be calculated in other ways. Here we evaluate coincidence with an infinite asymmetric exponential kernel as in SR:
$$P(s, s') = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s). \quad (21)$$
We introduced a normalization factor $(1 - \gamma)$ to guarantee that $P(s, s')$ is less than one ($(1 - \gamma)\sum_{t=0}^{\infty} \gamma^t = 1$). Then, PMI becomes
$$PMI = \log\left(\frac{(1 - \gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s)}{P(s)\, P(s')}\right) \quad (22)$$
$$= \log(SR(s, s')) - \log(P(s')) + \log(1 - \gamma) \quad (23)$$
$$= SI(s, s') + \log(1 - \gamma). \quad (24)$$
If we perform dimension reduction, log(1 − γ) can be ignored because it is a constant. Therefore, we can interpret SI as a special case of PMI in our model.
A.5 RELATIONSHIP BETWEEN MODEL COMPONENTS AND REPRESENTATIONS
To clarify the contribution of each model component to the results in this study, we performed a "lesion study" in which we removed some components of DSI and repeated the same evaluation procedures as in the main text. We summarize the results in Table 3. First, we tested representations obtained by singular value decomposition of the successor representation (SR-SVD), which was regarded as a model of grid cells in a previous study (Stachenfeld et al., 2017). The DSI model exceeded SR-SVD in all aspects examined in this study. Next, we tested the DSI model without decorrelation ($\beta_{cor} = 0$) and the DSI model without non-negativity (no rectification of representation vectors). Neither modification impaired the performance of navigation and inference, showing that, as theoretically expected, vector-based computations rely on SI itself rather than on the constraints. In contrast, removing decorrelation and non-negativity significantly impaired the emergence of grid-like units and concept-specific units, respectively. Thus, decorrelative NMF is crucial to obtain biologically plausible representations.
A.6 DETAILS OF EVALUATION OF GRID REPRESENTATIONS
In Section 4.2, we performed the gridness analysis following a previous experimental study (Sargolini et al., 2006). For each unit, we rotated the spatial autocorrelation map (Figure 2B, lower) and calculated correlations between the original and rotated maps. Gridness was defined as the difference between the lowest correlation at 60° and 120° and the highest correlation at 30°, 90°, and 150°. A unit was classified as a grid cell when its gridness exceeded zero.
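A minimal sketch of this score, assuming `autocorr` is a precomputed 2-D spatial autocorrelation map (NumPy array):

```python
import numpy as np
from scipy.ndimage import rotate

def gridness(autocorr):
    """Gridness score following Sargolini et al. (2006)."""
    def corr_at(angle):
        rot = rotate(autocorr, angle, reshape=False, order=1)
        return np.corrcoef(autocorr.ravel(), rot.ravel())[0, 1]
    # lowest correlation at grid angles minus highest at off-grid angles
    return min(corr_at(60), corr_at(120)) - max(corr_at(30), corr_at(90), corr_at(150))

# a unit is classified as a grid cell when gridness(autocorr) > 0
```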
In Figure 2C, we constructed a distribution of grid scales. Grid scales were determined as the median of the distances between the central peak and the six closest peaks (vertices of the inner hexagon) in the spatial autocorrelation map. The kernel function for kernel density estimation was Gaussian with a standard deviation of 1.
A.7 PATH INTEGRATION BY DSI
We performed path integration based on DSI representations using movement-conditional recurrent weights. This strategy has been used in previous studies such as grid cell modeling (Gao et al., 2019) and action-conditional video prediction (Oh et al., 2015). This mechanism is also consistent with a conventional biological model for path integration in which head direction signals activate one of the attractor networks specialized for different directional shifts of grid patterns (McNaughton et al., 2006; Burak & Fiete, 2009).
We made an estimate of the next representation vector $\hat{x}_{t+1}$ by linear transformation of the current representation vector $x(s_t)$ as
$$\hat{x}_{t+1} = M(a_t)\, x(s_t), \quad (25)$$
where $a_t$ represents a movement (one of eight directional movements in this study) and $M(a_t)$ is a movement-conditional recurrent weight matrix. Here, $x(s_t)$ was a DSI representation vector, and we optimized the matrix $M(a_t)$ by minimizing the prediction error $\|x(s_{t+1}) - M(a_t)x(s_t)\|_2^2$ by stochastic gradient descent during random walks on the state transition graph (20 simulation trials of 100,000 time steps). After optimization, we set an initial state $s_0$ and a sequence of movements $\{a_0, a_1, \ldots, a_{T-1}\}$, and performed path integration by the recursive estimation $\hat{x}_{t+1} = M(a_t)\hat{x}_t$. We determined a position at each time step by searching for the state representation vector with the minimum Euclidean distance to the estimated vector ($s_t = \mathrm{argmin}_s \|x(s) - \hat{x}_t\|_2$). As shown in Figure 6, this strategy gave accurate estimation of the spatial path from movement signals.
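A minimal sketch of this procedure, assuming a hypothetical matrix `X` whose rows are DSI vectors and a list of (state, action, next state) triples sampled from the random walk:

```python
import numpy as np

def learn_transition_matrices(X, transitions, n_actions, lr=0.01, epochs=10):
    """Fit movement-conditional weights M(a) by minimizing ||x(s') - M(a)x(s)||^2."""
    d = X.shape[1]
    M = np.stack([np.eye(d) for _ in range(n_actions)])
    for _ in range(epochs):
        for s, a, s_next in transitions:
            err = X[s_next] - M[a] @ X[s]
            M[a] += lr * np.outer(err, X[s])   # gradient step on the prediction error
    return M

def path_integrate(X, M, s0, actions):
    """Recursively apply Eq. 25 and decode the nearest state at each step."""
    x_hat, states = X[s0].copy(), [s0]
    for a in actions:
        x_hat = M[a] @ x_hat
        states.append(int(np.argmin(np.linalg.norm(X - x_hat, axis=1))))
    return states
```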
A.8 DETAILS OF VECTOR-BASED INFERENCE OF THE SPATIAL CONTEXT
In Section 4.4, we performed vector-based inference for spatial navigation in a novel context. Specifically, we define separated states in contexts A, B, and A+B as $s_i^A$, $s_i^B$, and $s_i^{A+B}$, where $i$ is a positional index indicating the same position in all contexts ($i = 1, 2, \cdots, 900$). We constructed representation vectors $x(s_i^A)$, $w(s_i^A)$, $x(s_i^B)$, and $w(s_i^B)$ through direct experiences, then we created $x(s_i^{A+B})$ and $w(s_i^{A+B})$ as
$$x(s_i^{A+B}) = x(s_i^A) + x(s_i^B), \quad (26)$$
$$w(s_i^{A+B}) = w(s_i^A) + w(s_i^B). \quad (27)$$
We performed spatial navigation in a given context using one of the three representations $\{x(s_i^A), w(s_i^A)\}$, $\{x(s_i^B), w(s_i^B)\}$, and $\{x(s_i^{A+B}), w(s_i^{A+B})\}$ for corresponding positions, following the rule described in Section 4.3. In Figure 7, we show the structures of the state transition graphs for the three contexts A, B, and A+B.
To learn representations in contexts A and B, we sampled sequences of $\{s_i^A\}_{i=1,\cdots,900}$ and $\{s_i^B\}_{i=1,\cdots,900}$ by random walks in contexts A and B. The procedure was basically the same as in Section 4, except that we increased the number of simulation trials from 500 to 1,000, and a state transition to the same position in the other context occurred every 5,000 time steps (transition between $s_i^A$ and $s_i^B$). We added this transition to associate the same position in different contexts. That is, we assumed that the setting of barriers can change during the experience, but this temporal association may be substituted by the similarity of sensory inputs across contexts. From the sampled sequences, we calculated PSI for all combinations of $\{s_i^A\}_{i=1,\cdots,900}$ and $\{s_i^B\}_{i=1,\cdots,900}$, and calculated 100-dimensional DSI vectors for the 1,800 states by simultaneous compression of all states. The discount factor $\gamma$ was set to 0.999.
A.9 VISUALIZATION OF SPATIAL STRUCTURES REPRESENTED BY DSI VECTORS
In Figure 8, we visualized the metric spaces defined by the representation vectors for contexts A and B, and by the composite vectors for the context A+B, using multidimensional scaling (MDS). This visualization clearly shows that DSI vectors for A and B capture the structures of spatial contexts A and B, and that adding those vectors yields an appropriate metric space for the novel context A+B.
A.10 DETAILS OF PREPROCESSING OF TEXT DATA
In Section 5, we used text data taken from the English Wikipedia dump (enwiki-latest-pages-articles, 22-May-2020). We first generated text files from the raw data using wikiextractor (https://github.com/attardi/wikiextractor). We tokenized texts by the nltk Punkt sentence tokenizer, and randomly sampled 100,000 articles containing 1,000 tokens at minimum. We lowercased all characters and removed punctuation characters in the data. After that, we selected words that appeared more than 1,000 times in the data, and substituted all other rare words with the <unk> symbol. Finally, we obtained data that contains 124M tokens and 9,376 words.
A.11 DETAILS OF EVALUATION OF CONCEPTUAL SPECIFICITY
In Section 5.2, the conceptual specificity of each unit was evaluated using the WordNet database (Princeton University, 2010). In WordNet, a word belongs to several synsets (sets of cognitive synonyms), and the semantic similarity of two synsets can be evaluated from the shortest path length between them in the WordNet structure (we used the path similarity function in the nltk library). We defined the similarity of two words as the highest similarity among all combinations of synsets of those words. We calculated the mean similarity of all combinations of TOP-10 words (the ten words that most highly activated the unit; Figure 5A) that are available in WordNet. We evaluated only units which had at least five TOP-10 words available in WordNet. Furthermore, we randomly generated 1,000 pairs of words available in WordNet, and generated a null distribution of similarity between words. We defined the significance threshold of similarity as the 95th percentile of the null distribution, and a unit was classified as a significantly concept-specific unit if the mean similarity of its TOP-10 words exceeded the threshold. Furthermore, we quantitatively defined the conceptual specificity of each unit as
$$\frac{s_{unit}}{s_{null}} - 1, \quad (28)$$
where $s_{unit}$ is the mean similarity of TOP-10 words and $s_{null}$ is the mean of the null distribution. This quantity becomes zero if the similarity between TOP-10 words is not different from that of random pairs, and becomes positive if TOP-10 words are semantically similar. This conceptual specificity was averaged over all evaluated units.
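A minimal sketch of this evaluation with the nltk WordNet interface; for brevity it omits the availability filters described above, and `null_mean` is assumed to be precomputed from 1,000 random word pairs:

```python
from itertools import combinations
import numpy as np
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def word_similarity(w1, w2):
    """Highest path similarity over all synset pairs of the two words."""
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(sims) if sims else None

def conceptual_specificity(top_words, null_mean):
    """Eq. 28: mean TOP-10 similarity relative to the random-pair baseline."""
    sims = [word_similarity(a, b) for a, b in combinations(top_words, 2)]
    sims = [s for s in sims if s is not None]
    return np.mean(sims) / null_mean - 1.0
```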
A.12 EXAMPLE DSI REPRESENTATIONS FOR WORDS
In Figure 9, we show the TOP-10 words of DSI units without manual selection. We found that several non-significant units exhibited conceptual specificity according to manual inspection (for example, unit 4 may be named a university cell). This is probably because of the limitations of the knowledge covered by WordNet. Therefore, we suppose that the current evaluation method tends to underestimate the number of concept-specific units. However, the comparison across models was fair because we used the same procedure and criteria for all models.
A.13 EXAMPLES OF THE ANALOGICAL INFERENCE TASK
In Table 4, we show some examples of the analogical inference in Mikolov's dataset. There is a relationship "WORD1 is to WORD2 as WORD3 is to WORD4". Then, the expected relationship in the vector space is WORD2 − WORD1 = WORD4 − WORD3. In this study, we performed inference of WORD4 by WORD3 + WORD2 − WORD1. We regarded an inference as correct if the actual vector of WORD4 had the largest cosine similarity to the inferred vector among all word representation vectors (except those for WORD1, WORD2, and WORD3). If the number of words is 10,000, the chance level of the correct answer rate is 0.01%. Therefore, the performance shown in this study (more than 50%) is far above the chance level.
A.14 CLUSTERING OF SEMANTIC CATEGORIES IN DSI SPACE
Figure 10 shows the structure of DSI word representations visualized by MDS. We arbitrarily chose words based on the 10 semantic categories used in Reber et al. (2019). We used the same dissimilarity metric as Reber et al. (2019) (1 − Pearson's correlation coefficient).
A.15 INTUITIVE MECHANISM OF WORD REPRESENTATIONS BY DSI
In this section, we discuss how DSI vectors represent and compute words.
First, we analyzed the ratio of each element to the sum of all elements in DSI vectors. We found that even the largest element accounted for only 5% of the sum of all elements on average (Figure 11). This result shows that DSI vectors for words are non-sparse and distributed; thus each word is represented by the combination of multiple conceptual units.
Next, for further clarification, we inspected the representations of an example set of words: France, Paris, Germany, and Berlin. We can see that there are two analogical relationships (country-capital and French-German relationships). We identified the most active units (TOP-2) in the DSI vectors for those words, and listed the TOP-10 words for the identified units. As a result, we could see that "France" is represented by the combination of units that we could name a French cell and a country cell, whereas "Berlin" is represented by the combination of a German cell and a capital cell, and so on (Figure 12). This example also gives a simple interpretation of word similarity in the DSI vector space. If words are similar, they share a large number of active units, like the country cell shared by the representations of France and Germany. Thus, semantic similarity between words increases the cosine similarity between word vectors.
Furthermore, we also identified the largest elements (the largest absolute values) in the difference vectors between words, and found that they correspond to the semantic difference between the words (Figure 12). Thus, we can regard analogical inference by DSI vectors as recombination of conceptual units. For example, adding the Germany − Berlin vector to the Paris vector deactivates the capital cell and activates the country cell, which leads to the transformation of Paris into France.
Such a property of the vector space is the same as in conventional word embedding methods, but a unique feature of our model is that those analogical relationships are factorized into separate units. We speculate that the constraints of decorrelative NMF are sufficient conditions to align each semantic factor to an axis of the word vector space, and the mechanism is probably related to how disentangled representations emerge in visual feature learning models (Higgins et al., 2017; Carbonneau et al., 2020). | 1. What is the focus of the paper, and what are the authors' contributions to understanding neural representation?
2. What are the strengths and weaknesses of the proposed framework, particularly regarding its application to grid cells and word embedding?
3. Do you have concerns about the technical content or the motivations behind certain equations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any relevant works missing from the discussion that could provide additional context or comparison? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors develop a “disentangled successor information” framework for understanding the neural representation for physical and conceptual spaces. The paper then applied this framework to study two problems: grid cells and word embedding. Overall, this paper contains some interesting and intriguing ideas. But the technical content is not entirely rigorous.
Strengths And Weaknesses
Strength:
The paper studies both grid cells and word embedding problems.
The attempt to link grid cells and word embedding is ambitious.
Using successor information to study the grid cells is an interesting and novel idea.
Main concerns:
Why decorrelative NMF is a good objective function for the grid cell system is unclear. Eq. (6) needs better motivation.
It is unclear what determines the structure of the emerging representation. Does word embedding similarly lead to a grid representation? If not, why?
The improvement of this model (in the case of word embedding) compared to prior work is in fact very subtle (e.g., Table 2).
How the present work differs from prior work, e.g., Stachenfeld et al. 2017 and Dordek et al. 2016, is unclear. This paper performs matrix factorization, while previous work did PCA. Are they really different? This needs to be better explained.
The similarity between Eq 13 and Eq 15 seems to be superficial. If they are indeed equivalent, the implication seems to be that it should be possible to use an objective function based on Eq. 13 to derive the grid cells. But this is not discussed or shown.
Interpreting an information-theoretical quantity as the neural activity seems to be risky. (I am actually not sure if it makes any sense.) This crucial assumption needs to be better justified.
Other comments: In Dordek et al., 2016 and Sorscher et al., 2019, in addition to the non-negativity, the inhibitory surround is also critical for the grid firing fields to emerge. This point should be discussed in more detail. Several relevant papers on understanding grid cells with machine learning approaches are missing, e.g., Cueva & Wei, ICLR, 2018; Gan, Xie, Zhu, Wu, ICLR, 2019. In Section 4.2, this explanation is not the first that attempts to theoretically explain the grid responses and the scale ratio. Prior work on this should be discussed.
Clarity, Quality, Novelty And Reproducibility
The clarity, quality, and rigor need substantial further improvement.
ICLR | Title
Unified neural representation model for physical and conceptual spaces
Abstract
The spatial processing system of the brain uses grid-like neural representations (grid cells) for supporting vector-based navigation. Experiments also suggest that neural representations for concepts (concept cells) exist in the human brain, and conceptual inference relies on navigation in conceptual spaces. We propose a unified model called “disentangled successor information (DSI)” that explains neural representations for physical space and linguistic concepts. DSI generates grid-like representations in a 2-dimensional space that highly resemble those observed in the brain. Moreover, the same model creates concept-specific representations from linguistic inputs, corresponding to concept cells. Mathematically, DSI vectors approximate value functions for navigation and word vectors obtained by word embedding methods, thus enabling both spatial navigation and conceptual inference based on vector-based calculation. Our results suggest that representations for space and concepts can emerge from a shared mechanism in the human brain.
1 INTRODUCTION
In the brain, grid cells in the entorhinal cortex (EC) represent the space through grid-like representations (Hafting et al., 2005; Doeller et al., 2010; Jacobs et al., 2013). This neural representation is often related to vector-based spatial navigation because grid cells provide a global metric over the space. Theoretically, an animal can estimate the direction to a goal when representations of the current position and the goal position are given (Fiete et al., 2008; Bush et al., 2015). Furthermore, self-position can be estimated by integrating self-motions when sensory information is not available (McNaughton et al., 2006). These functions are the basis of robust spatial navigation by animals.
There are not only spatial but also conceptual representations in EC. Neurons called "concept cells" have been found in the human medial temporal lobe, including EC (Quiroga, 2012; Reber et al., 2019). Concept cells respond to specific concepts, namely, stimuli related to a specific person, a famous place, or a specific category like "foods" and "clothes". Furthermore, recent experiments also suggest that grid-like representations appear not only for physical space but also for conceptual space if there is a 2-dimensional structure (e.g., lengths of a neck and legs, intensities of two odors), and those representations are the basis of vector-based conceptual inference (Bao et al., 2019; Constantinescu et al., 2016; Park et al., 2021). Thus, it is expected that there is a shared processing mechanism for physical and conceptual spaces in EC. The existence of a shared neural mechanism may also explain why humans use the sense of physical space (such as directionality) to communicate abstract concepts (conceptual metaphor; Lakoff & Johnson, 1980). However, the principle behind such universal computation in the brain is still unclear.
In this paper, we propose a representation model which we call the disentangled successor information (DSI) model. DSI is an extension of successor representation (SR), which stems from a theory of reinforcement learning and became one of the promising computational models of the hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Like eigenvectors of SR, DSI forms grid-like codes in a 2-D space, and those representations support vector-based spatial navigation because DSI approximates value functions for navigation in the framework of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021). Remarkably, when we apply DSI to text data by regarding a sequence of words as a sequence of states, DSI forms concept-specific representations like concept cells. Furthermore, we show a mathematical correspondence between DSI and word embedding models in natural language processing (NLP)
(Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014), thus we can perform intuitive vector-based conceptual inference as in those models. Our model reveals a new theoretical relationship between spatial and linguistic representation learning, and suggests a hypothesis that there is a shared computational principle behind grid-like and concept-specific representations in the hippocampal system.
2 CONTRIBUTIONS AND RELATED WORKS
We summarize the contributions of this work as follows. (1) We extended SR to successor information (SI), by which we theoretically connected reinforcement learning and word embedding, and thus spatial navigation and conceptual inference. (2) We found that dimension reduction with constraints for grid-like representations (decorrelative NMF) generates disentangled word vectors with concept-specific units, which has not been found previously. (3) Combining these results, we demonstrated that a computational model for grid cells can be extended to represent and compute linguistic concepts in an intuitive and biologically plausible manner, which has not been shown in previous studies.
Our model is an extension of successor representation (SR), which has recently been viewed as a plausible model of the hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Furthermore, default representation (DR), which is based on linear reinforcement learning theory, has also been proposed as a model of EC (Piray & Daw, 2021). We show that our model can extract linguistic concepts, which has not been shown for SR and DR. Furthermore, we demonstrate vector-based compositionality of words in our model, which expands the range of compositionality of EC representations (Piray & Daw, 2021) to semantic processing.
Our model produces biologically plausible grid-like representations in 2-D space, which support spatial navigation. Previous studies have revealed that non-negative and orthogonal constraints are important to obtain realistic grid-like representations (Dordek et al., 2016; Sorscher et al., 2019). Furthermore, recurrent neural networks form grid-like representations through learning path integration, and those representations support efficient spatial navigation (Banino et al., 2018; Cueva & Wei, 2018; Gao et al., 2019). Some of those models have reproduced experimentally observed scaling ratios between grid cell modules (Banino et al., 2018; Sorscher et al., 2019). However, previous models have not been applied to the learning of linguistic concepts, or other complex conceptual spaces in real-world data. Whittington et al. (2020) proposed a unified model for spatial and non-spatial cognition. However, their model was applied only to simple graph structures, and conceptual specificity like that in our model was not observed.
Analogical inference by our model is the same function as that of word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). However, a unique feature of DSI representations is that each dimension of the vectors corresponds to a specific concept, like concept cells in the human brain (Quiroga, 2012; Reber et al., 2019). Our model provides a biologically plausible interpretation of word embedding: each word is represented by a combination of disentangled conceptual units, inference is recombination of those concepts, and such representations emerge through the same constraints as grid cells. It was recently shown that transformer-based models (Vaswani et al., 2017; Brown et al., 2020), which are currently state-of-the-art models in NLP, generate grid-like representations when applied to spatial learning (Whittington et al., 2022). Similarly to our model, this finding implies a relationship between spatial and linguistic processing in the brain. However, concept-specific representations have not been found in such models. Furthermore, the clear theoretical interpretation in this study depends on the analytical solution for skip-gram (Levy & Goldberg, 2014). Such an analytical solution is currently unknown for transformer-based models.
3 MODEL
3.1 DISENTANGLED SUCCESSOR INFORMATION
Let us assume $N_s$ discrete states exist in the environment. The successor representation (SR) between two states $s$ and $s'$ is defined as
$$SR(s, s') = E\left[\sum_{t=0}^{\infty} \gamma^t \delta(s_t, s') \,\middle|\, s_0 = s\right] = \sum_{t=0}^{\infty} \gamma^t P(s_t = s' \mid s_0 = s), \quad (1)$$
where δ(i, j) is Kronecker’s delta and γ is a discount factor. We describe how we calculate SR in this study in Appendix A.1. SR and its dimension-reduced representations have been viewed as models of hippocampus and entorhinal cortex, respectively (Stachenfeld et al., 2017).
Based on SR, we define successor information (SI) and positive successor information (PSI) as
$$SI(s, s') = \log(SR(s, s')) - \log(P(s')), \quad (2)$$
$$PSI(s, s') = \max\{SI(s, s'), 0\}. \quad (3)$$
In this study, we regard this quantity as a hippocampal model instead of SR (Figure 1A).
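As an illustration, SI and PSI can be computed directly from an SR matrix and the state occurrence probabilities. A minimal sketch (array names are our own; `eps` guards against taking the log of zero):

```python
import numpy as np

def positive_successor_information(SR, p_state, eps=1e-12):
    """PSI(s, s') = max(log SR(s, s') - log P(s'), 0), following Eqs. 2-3."""
    SI = np.log(SR + eps) - np.log(p_state[None, :] + eps)
    return np.maximum(SI, 0.0)

# psi = positive_successor_information(sr_matrix, occurrence_probs)
```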
Next, we introduce a novel dimension reduction method which we call decorrelative non-negative matrix factorization (decorrelative NMF). Decorrelative NMF can be regarded as a variant of NMF (Lee & Seung, 1999) with additional constraints for decorrelation. By applying decorrelative NMF to PSI, we obtain representation vectors called disentangled successor information (DSI), which we regard as a model of EC (Figure 1A). In decorrelative NMF, we obtain $D$-dimensional vectors $x(s)$ and $w(s)$ ($D < N_s$) by minimization of the following objective function
$$J = \frac{1}{2} \sum_{s,s'} \rho(s, s')\left(PSI(s, s') - x(s) \cdot w(s')\right)^2 + \frac{\beta_{cor}}{2} \sum_{i \neq j} (Corr(i, j))^2 + \frac{\beta_{reg}}{2} \sum_s \left(\|x(s)\|^2 + \|w(s)\|^2\right), \quad (4)$$
subject to the non-negativity constraints $\forall i,\ x_i(s) \geq 0,\ w_i(s) \geq 0$. Here $\rho(s, s')$ is a weight for the squared error:
$$\rho(s, s') = \frac{1}{N_s V}\left(\frac{1}{M} PSI(s, s') + \rho_{min}\right), \quad (5)$$
where $M$ and $V$ are the mean and variance of PSI, respectively, and $\rho_{min}$ is a small value to avoid zero weights. $Corr(i, j)$ is the correlation between two dimensions of $x(s)$:
$$Corr(i, j) = \frac{\sum_s \tilde{x}_i(s)\tilde{x}_j(s)}{\sqrt{\sum_s (\tilde{x}_i(s))^2 \sum_s (\tilde{x}_j(s))^2}}, \quad (6)$$
where $\tilde{x}_i(s) = x_i(s) - \frac{1}{N_s}\sum_s x_i(s)$. The first term of the objective function is weighted approximation error minimization, the second term works for decorrelation between dimensions, and the third term regularizes the representation vectors. Optimization was performed by Nesterov's accelerated gradient descent method (Nesterov, 1983) with rectification of $x_i(s)$, $w_i(s)$ every iteration. We describe additional details in Appendix A.2.
3.2 RELATIONSHIPS WITH REINFORCEMENT LEARNING AND WORD EMBEDDING
We show a dual interpretation of our model. On the one hand, DSI approximates value estimation of linear reinforcement learning and thus supports goal-directed decision making and navigation. On the other hand, the same representation approximates word embedding in NLP and thus supports semantic computation.
First, our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation. Linear reinforcement learning assumes a default policy and imposes an additional penalty on deviation from the default policy; then value functions can be obtained explicitly by solving linear equations. Let us consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state $s_G$ arbitrarily chosen from the non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability $p_{NT}$. Furthermore, we assume that rewards at non-terminal states are uniformly negative and the reward at the terminal state is positive, so that the agent has to take a short path to the goal to maximize reward. In this setting, we can obtain the value functions $v^*(s)$ of linear reinforcement learning as
$$\lambda^{-1} v^*(s) = \log(SR^d(s, s_G)) - \log P^d(s_G) = SI^d(s, s_G) \approx x(s) \cdot w(s_G), \quad (7)$$
where $SR^d(s, s_G)$ and $SI^d(s, s_G)$ are SR and SI under the default policy, respectively. We describe the details of the derivation in Appendix A.3. Therefore, SI is proportional to value functions for spatial navigation, and inner products of DSI vectors approximate value functions. Based on this interpretation, we basically regard $x(s)$ as a representation of each state, and $w(s)$ represents a temporary goal.
Second, DSI is related to word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In linguistics, pointwise mutual information (PMI) and positive pointwise mutual information (PPMI) are used to measure the degree of coincidence between two words (Levy & Goldberg, 2014). They are defined as
$$PMI = \log\left(\frac{P(word_i, word_j)}{P(word_i)\, P(word_j)}\right), \quad (8)$$
$$PPMI = \max\{PMI, 0\}, \quad (9)$$
where $P(word_i, word_j)$ is the coincidence probability of two words (in a certain temporal window). It has been proven that dimension reduction of PMI approximates the word embedding method skip-gram (Mikolov et al., 2013a;b), and similar performance is obtained using PPMI (Levy & Goldberg, 2014). GloVe (Pennington et al., 2014) is also based on this perspective. SI can be written as
$$SI(s, s') = \log(SR(s, s')) - \log(P(s')) = \log\left(\frac{\sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s)}{P(s)\, P(s')}\right). \quad (10)$$
In this formulation, we can see the mathematical similarity between PMI and SI by regarding words as states ($s = word_i$, $s' = word_j$), and thus the correspondence between PPMI and PSI. Because of this relationship, we can expect that DSI, which is obtained through dimension reduction of PSI, has similar properties to word embedding methods. The difference is how coincidence is counted: in SI, coincidence is evaluated with an asymmetric exponential kernel as in SR, whereas a symmetric rectangular temporal window is often used in typical word embedding (see Appendix A.4 for further detail).
3.3 DECORRELATIVE NMF RELATES TO GRID CELLS AND DISENTANGLEMENT
Constraints in decorrelative NMF (non-negativity, decorrelation (or orthogonality), and regularization) are important for the generation of grid cells, as shown in previous theoretical studies on grid cells (Dordek et al., 2016; Cueva & Wei, 2018; Banino et al., 2018; Gao et al., 2019; Sorscher et al., 2019). They are also biologically plausible because neural activity is basically non-negative and decorrelation is possible through lateral inhibition. On the other hand, non-negativity (Oja & Plumbley, 2004) and decorrelation (Hyvärinen & Oja, 2000) are also important for the extraction of independent components, and it is known that imposing independence in the latent space of deep generative models results in the emergence of disentangled representations for visual features (Higgins et al., 2017). Therefore, in word embedding, we expected that those constraints would help the emergence of independent and disentangled units for linguistic concepts. The constraints in decorrelative NMF are actually crucial for the results obtained in this study (Appendix A.5).
As disentangled visual representations explain single-cell activities in the higher-order visual cortex (Higgins et al., 2021), we may similarly interpret conceptual representations in our model as concept cells in the human medial temporal lobe (Quiroga, 2012). Previous studies suggest that each concept cell responds to a specific concept, whereas population-level activity patterns represent abstract semantic structures (Reber et al., 2019). Such a property is consistent with the factorized and distributed nature of disentangled representation vectors.
4 LEARNING REPRESENTATIONS OF PHYSICAL SPACES
In this section, we empirically show that the DSI model forms biologically plausible grid-like representations in a 2-D physical space, and that they support spatial navigation. These results also apply to conceptual spaces with a 2-D structure, depending on the definition of states.
4.1 LEARNING PROCEDURE
As an environment, we assumed a square room tiled with 30×30 discrete states (Figure 2A). In each simulation trial, an agent starts at one of those 900 states and transits to one of eight surrounding states each time except that transitions are limited at states along the boundary (the structure was not a torus). Transitions to all directions occur with an equal probability. We performed 500 simulation trials and obtained a sequence of 100,000 time steps in each trial. We calculated occurrence probabilities (P (s)) and a successor representation matrix (SR(s, s′)) of 900 states from those sequences, and calculated PSI and DSI (100-dimensional) as described in the Model section. The discount factor γ was set to 0.99. We additionally tested spatial navigation in a structure with separated and interconnected rooms (see Figure 3C). In that case, we used the discount factor γ = 0.999.
4.2 EMERGENCE OF GRID-LIKE REPRESENTATIONS
Here we call each dimension of the DSI representation vectors $x(s)$ a neural "unit", and we regard the value of each dimension at each state as a neural activity (or a neural representation). As shown in Figure 2B, many units exhibited grid-like activity patterns in the space. We performed a gridness analysis that has been used in animal experiments (Sargolini et al., 2006) and found that 51% of units were classified as grid cells. Similarly, 53% of units in $w(s)$ were classified as grid cells.
Furthermore, we checked whether DSI representations in the physical space reproduce a property of biological grid cells. Actual grid cells in the rat brain exhibit multiple discrete spatial scales, and the ratio between grid scales of adjacent modules is $\sqrt{2}$ (Stensola et al., 2012). We constructed a distribution of grid scales of DSI units by kernel density estimation, which revealed that multiple peaks of grid scales existed and the ratio between grid scales of adjacent peaks was $\sqrt{2}$ (Figure 2C). These results show that the DSI model constructs biologically plausible grid-like representations in the 2-D physical space. We describe details of the analysis methods in Appendix A.6.
4.3 NEAR-OPTIMAL SPATIAL NAVIGATION BY DSI VECTORS
As discussed in Section 3.2, the inner product of DSI representations approximates value functions for spatial navigation. Therefore, we tested whether DSI representations actually enable near-optimal navigation in the space.
We assume that a start location (state $s_{init}$) and a goal location (state $s_G$) are randomly given in each trial such that the shortest path length is at least 10, and the agent has to navigate between them. To solve the task, we define a vector-based state transition rule. Suppose that the agent is at a state $s$, and the set of neighboring states of $s$ is $A(s)$. Given the goal representation vector $w(s_G)$, the value function of a neighboring state $s_{next} \in A(s)$ is estimated by $x(s_{next}) \cdot w(s_G)$, and the agent transits to the state that has the maximum value. This state transition rule can be geometrically interpreted as the choice of the movement that has the closest angle with the goal vector in the representation space (Figure 3A). Alternatively, we can interpret that the agent estimates value functions by linear readout from grid-like DSI representations. Because of the approximation error, this rule did not always give optimal navigation (the shortest path from the start to the goal). However, the agent could take the shortest path to the goal in 93.9% of the 1,000 trials we tested (an example is shown in
Figure 3B). Furthermore, 97.2% of trials were near-optimal navigation in which the actual path length was shorter than 1.1 times the shortest path length. The same framework also worked in a relatively complex environment with separated rooms (Figure 3C). In this environment, the ratios of optimal and near-optimal navigation were 68% and 82.6%, respectively. We also confirmed that we can perform path integration based on DSI representations using movement-conditional recurrent weights (McNaughton et al., 2006; Burak & Fiete, 2009; Oh et al., 2015; Gao et al., 2019) (Appendix A.7). These results show that DSI representations can support spatial navigation, which corresponds to the contribution of biological grid cells to spatial navigation.
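A minimal sketch of this greedy rule, assuming hypothetical arrays `X` and `W` whose rows are the DSI vectors $x(s)$ and $w(s)$, and a `neighbors` mapping from each state index to its adjacent state indices:

```python
import numpy as np

def navigate(X, W, neighbors, s_init, s_goal, max_steps=200):
    """Move to the neighbor with the largest estimated value x(s_next) . w(s_goal)."""
    s, path = s_init, [s_init]
    while s != s_goal and len(path) <= max_steps:
        s = max(neighbors[s], key=lambda nxt: X[nxt] @ W[s_goal])
        path.append(s)
    return path
```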
4.4 VECTOR-BASED INFERENCE OF SPATIAL CONTEXTS
We additionally found that we can perform vector-based inference for spatial navigation in a novel context. First, we constructed DSI representation vectors in spatial contexts A and B, each of which has a barrier (Figure 4A). Then, we created representation vectors for a novel context A+B with two barriers by simply adding the representation vectors for the familiar contexts A and B (Figure 4A). We tested vector-based navigation (described in Section 4.3) in the three spatial contexts A, B, and A+B, using one of the three sets of representations for A, B, and A+B. Naturally, representation vectors for A and B gave the best performance in contexts A and B, respectively (Figure 4B). Notably, the composite representation vectors for A+B achieved the best performance in the context A+B (Figure 4B). This
result suggests that we can utilize vector-based composition of representations for a novel spatial context. We describe details of the simulation in Appendix A.8.
Additional analysis by multidimensional scaling (MDS) suggests that summing DSI vectors leads to the composition of an appropriate metric space for the novel context (Appendix A.9). This is potentially useful for composing multiple constraints that change reachability between states in various tasks (such as control of robotic arms and playing computer games), like the composition of tasks in soft-Q learning (Haarnoja et al., 2018; Makino).
5 LEARNING REPRESENTATIONS OF CONCEPTUAL SPACES
In this section, we show that the same DSI model can learn representations for a complex conceptual space from linguistic inputs, and those representations support vector-based conceptual inference.
5.1 LEARNING PROCEDURE
We used text data taken from English Wikipedia, which contains 124M tokens and 9,376 words (see Appendix A.10 for details of the preprocessing). To construct DSI representations, we regarded each word as a "state", and considered the text data as a sequence of 9,376 states ($N_s = 9376$). Then, we applied exactly the same learning procedure as in the experiment on physical spaces. We obtained 300-dimensional DSI representation vectors for each word. The discount factor $\gamma$ was set to 0.9. The settings of the other parameters were the same as in the experiment on physical spaces.
5.2 EMERGENCE OF CONCEPT-SPECIFIC REPRESENTATIONS
As in the previous section, we regard each dimension of the representation vectors as a neural unit, and checked how various words activate those units. Specifically, we listed the ten words that elicited the highest activities in each unit (TOP-10 words). Consequently, we found that many units are activated by words related to specific concepts (Figure 5; other examples in Appendix A.12), which could be named "game cell" or "president cell", for example. We quantified this conceptual specificity through WordNet-based semantic similarity between words (Princeton University, 2010). We compared the mean similarity among TOP-10 words with a null distribution of similarity between random word pairs, by which we determined statistically significant concept-specific units and quantified the degree of conceptual specificity of each unit (see Appendix A.11 for details). DSI exhibited a larger number of significantly concept-specific units and higher average conceptual specificity than other well-established word embedding methods such as skip-gram and GloVe (Table 1) (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). We also analyzed the conceptual specificity of representations in the embedding layer of a pretrained BERT model (bert-base-uncased in Hugging Face transformers) (Devlin et al., 2018; Wolf et al., 2020), which was lower than that of DSI (Table 1). This result shows that our DSI model forms more concept-specific representations than other models.
Additional analyses revealed that word representation vectors are non-sparse and distributed (Appendix A.15). Therefore, each word is represented by a combination of concept-specific units shared by several related words. For example, "France" can be represented by the combination of units which we could name French cell and country cell (Appendix A.15).
Table 1: METHOD | EVALUATED | SIGNIFICANT | RATIO | SPECIFICITY
5.3 VECTOR-BASED COMPUTATION IN THE CONCEPTUAL SPACE
Given that DSI and word embedding methods are mathematically similar (Section A.4), we expect that DSI vectors have similar properties to representation vectors learned by those word embedding methods. We evaluated the performance of DSI vectors in two tasks that have been used to evaluate word embedding methods: word similarity and analogical inference (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In the word similarity task, we calculated cosine similarity between representation vectors of word pairs, and evaluated the rank correlation between those cosine similarities and human word similarities (WS353 dataset (Agirre et al., 2009); 248/345 word pairs were used). In the analogical inference task, we performed calculation of vectors such as x(king) − x(man) + x(woman) and checked whether the resultant vector has the maximum cosine similarity with x(queen) (Mikolov’s dataset (Mikolov et al., 2013a;b); 3157/19544 questions were used; examples in Appendix A.13). The result shows that DSI vectors achieved comparable performance with other well-established word embedding methods (Table 2).
This result indicates that the similarity of DSI representation vectors corresponds to semantic similarity. This property is consistent with the experimental observation that population-level pattern similarity of concept cell activities represents semantic categories (Reber et al., 2019). By visualizing the structure of DSI representations by MDS, we can actually see clustering of words corresponding to the 10 semantic categories used in Reber et al. (2019) (Appendix A.14). Furthermore, conceptual inference is possible through arithmetic composition of DSI vectors. We additionally found that this inference is an intuitive recombination of concept-specific units in some cases. For example, the transformation from "Paris" to "France" corresponds to activation of a country cell and deactivation of a capital cell, which is possible by summing the difference of the Germany and Berlin vectors (Appendix A.15).
Table 2: METHOD | SIMILARITY | ANALOGY
6 DISCUSSION
In this paper, we proposed a theoretically interpretable and biologically plausible neural representation model for physical and conceptual spaces. We demonstrated that our DSI model forms grid-like representations in the physical space and concept-specific representations in the linguistic space, which are assumed to correspond to neural representations in EC. Furthermore, we showed that SI is mathematically related to linear reinforcement learning and word embedding methods; thus DSI representations support spatial navigation and conceptual inference. These results suggest that we can extend the spatial representation model of EC to learn and compute linguistic concepts, which apparently seems to be a computational domain different from physical space.
In Section 5.2, we demonstrated concept-specific representations created from text data. To the best of our knowledge, such a property has not been reported for any word embedding method. However, we unexpectedly found that continuous-bag-of-words (CBOW) showed relatively high conceptual specificity. Although DSI is related to PMI, skip-gram, and GloVe, we have not found a relationship to CBOW. Further clarification of the necessary conditions for conceptual specificity remains an open problem.
Although DSI has a clear mathematical interpretation, how biological neural networks can learn DSI is still unclear. A possible solution is an extension of the skip-gram neural network with SR, non-negativity, and decorrelation. Because SI corresponds to PMI, which is the optimum of the skip-gram neural network, we can expect that such an extended skip-gram network learns DSI. Building such a model and relating it to the circuit mechanism in the hippocampus and EC are left for future research.
Our model relates word embedding to conceptual representations in the brain. A previous study showed that skip-gram representations support high-performance decoding of semantic information from fMRI data (Nishida & Nishimoto, 2018). Another study revealed that hippocampal theta oscillation codes semantic distances between words measured in a word2vec subspace (Solomon et al., 2019). These experimental results support our hypothesis. However, recent studies have shown that representations in transformer-based models (Vaswani et al., 2017) such as GPT (Brown et al., 2020) achieve remarkable performance in linear fitting to neural recordings during linguistic processing (Goldstein et al., 2022; Schrimpf et al., 2021). A major difference between our DSI model and transformer-based models is that DSI representations are basically fixed (static embedding), whereas transformer-based models flexibly create context-dependent representations (dynamic embedding). Conceptual interpretation obviously depends on the context; thus activities of concept cells are context-dependent (Bausch et al., 2021). Therefore, our DSI model should be extended to process context dependence, hopefully by combination with other models for learning context-dependent latent cognitive states (Uria et al., 2020; George et al., 2021; Whittington et al., 2020).
Another direction of future research is application to general conceptual spaces by learning DSI representations from low-level sensory inputs, like spatial learning from visual and auditory inputs in previous models (Banino et al., 2018; Taniguchi et al., 2018; Uria et al., 2020). This may be possible by learning discrete states through unsupervised clustering for deep networks (Caron et al., 2018). As for the human brain, infants probably form primitive spatial and conceptual representations from sensory signals, and later linguistic inputs enrich those representations. We speculate that real-world sensory data also contain information about the conceptual space, for which DSI can be extended to learn those structures. Such a model would clarify the role of the hippocampal system in the computation of general conceptual spaces.
A APPENDIX
A.1 CALCULATION OF SR
SR is a variant of value functions in reinforcement learning, thus we can use various methods such as temporal-difference (TD) learning for its construction. Throughout this study, we used a direct count method because we performed only offline processing of finite data. In a sequence of states $\{s_1, \ldots, s_t, \ldots, s_T\}$, we recursively calculated exponential traces of past states $z(s, t) = \sum_{\tau=0}^{t-1} \gamma^{\tau} \delta(s_{t-\tau}, s)$ as
$$z(s, t) = \gamma z(s, t-1) + \delta(s_t, s), \quad (11)$$
and calculated SR from state counts and coincidence counts as
$$SR(s, s') = \frac{\sum_{t=1}^{T} z(s, t)\,\delta(s_t, s')}{\sum_{t=1}^{T} \delta(s_t, s)}. \quad (12)$$
A.2 DETAILS OF DECORRELATIVE NMF
In decorrelative NMF, we iteratively updated vectors x(s) and w(s) by Nesterov’s accelerated gradient descent method to minimize the objective function (Eq. 4), rectifying all elements every iteration. Gradients are
$$\frac{\partial J}{\partial x_k(s)} = -\sum_{s'} \rho(s, s')\left(PSI(s, s') - x(s) \cdot w(s')\right) w_k(s') + \beta_{cor} \sum_{j \neq k} \frac{Corr(k, j)\,\tilde{x}_j(s)}{\sqrt{\sum_s (\tilde{x}_k(s))^2 \sum_s (\tilde{x}_j(s))^2}} + \beta_{reg}\, x_k(s), \quad (13)$$
$$\frac{\partial J}{\partial w_k(s')} = -\sum_s \rho(s, s')\left(PSI(s, s') - x(s) \cdot w(s')\right) x_k(s) + \beta_{reg}\, w_k(s'). \quad (14)$$
We note that we regarded the mean and variance of $x_i(s)$ in the correlation ($\frac{1}{N_s}\sum_s x_i(s)$ and $\sum_s (\tilde{x}_i(s))^2$ in Eq. 6) as constants in the calculation of these gradients. Practically, this heuristic did not affect the performance of decorrelation.
Throughout this paper, the learning rate was 0.05 and the number of iterations was 10,000. Parameters were $\beta_{cor} = 1$, $\beta_{reg} = 0.001$, and $\rho_{min} = 0.001$.
A.3 MATHEMATICAL RELATIONSHIP OF DSI AND REINFORCEMENT LEARNING
In this section, we show that our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation.
In linear reinforcement learning, an agent aims to maximize "gain" instead of reward. Assuming a default policy $\pi^d(s)$ (any policy is available; typically a random walk in the case of an exploration task), the gain function is defined as
$$g(s) = r(s) - \lambda\, KL(\pi(s) \,|\, \pi^d(s)), \quad (15)$$
where $r(s)$ is the expected reward at state $s$ and $\lambda KL(\pi(s)|\pi^d(s))$ is the cost imposed on the difference between the current policy $\pi(s)$ and the default policy $\pi^d(s)$ ($\lambda$ is a relative weight of the cost). Then, previous works have shown that the optimal policy and the corresponding value functions can be determined explicitly by solving linear equations (Todorov, 2006; 2009; Piray & Daw, 2021). Here we consider an environment that consists of $N_N$ non-terminal states and $N_T$ terminal states. We define two transition probability matrices under the default policy: $P_{NT}$ is an $N_N \times N_T$ matrix for transitions from non-terminal states to terminal states, and $P_{NN}$ is an $N_N \times N_N$ matrix for transitions across non-terminal states. Furthermore, $r_N$ and $r_T$ are vectors of rewards at non-terminal states and terminal states, respectively. In this condition, a vector of value functions under the optimal policy $v^* = (v^*(s_1), \ldots, v^*(s_{N_N}))$ is obtained as
exp(λ−1v∗) = MPNT exp(λ −1rT ), (16)
where M = (diag(exp(−λ−1rN ))− PNN )−1 is DR (Piray & Daw, 2021). To relate v∗ to SI, we consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state sG arbitrarily chosen from non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability pNT . Furthermore, we assume that reward at non-terminal states are uniformly negative and reward at the terminal state is positive so that the agent has to take a short path to goal to maximize reward. Specifically, we assume all elements of rN are λ log γ, and rT = −λ(log γ+log pNT +logP d(sG)) where γ is an arbitrary value in the range (0, 1), and P d(sG) is a probability of visiting the state sG under the default policy. Then, we obtain
exp(λ−1v∗) = 1
P d(sG) (I − γPNN )−1e(iG), (17)
where e(iG) = (0, . . . , 0, 1, 0, . . . , 0)T (iG is the index of the goal state). Because (I−γPNN )−1 is equivalent to a successor representation matrix with a discount factor γ (Dayan, 1993; Stachenfeld et al., 2017), we finally obtain
λ−1v∗(s) = log(SRd(s, sG))− logP d(sG) = SId(s, sG) ≈ x(s) ·w(sG), (18) where SRd(s, sG) and SId(s, sG) are SR and SI under the default policy, respectively. Thus, SI is proportional to value functions for spatial navigation and inner products of DSI vectors approximates value functions. Based on this interpretation, we basically regard x(s) as a representation of each state, and w(s) represents a temporary goal.
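This identity can be checked numerically on a small example. The sketch below builds a random default policy, attaches a virtual terminal state to the goal as described above, and verifies that Eq. 16 reproduces the SI of Eq. 18 (with $\lambda = 1$); the visiting probability is a stand-in value and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, g, p_NT = 6, 0.9, 2, 0.3
P = rng.random((n, n)); P /= P.sum(1, keepdims=True)  # default policy
P_NN = P.copy(); P_NN[g] *= (1 - p_NT)                # leak to terminal at the goal
P_NT = np.zeros((n, 1)); P_NT[g, 0] = p_NT
p_G = 1.0 / n                                         # stand-in for P^d(s_G)
r_T = -(np.log(gamma) + np.log(p_NT) + np.log(p_G))   # terminal reward, lambda = 1
M = np.linalg.inv(np.eye(n) / gamma - P_NN)           # DR with r_N = log(gamma)
v = np.log((M @ P_NT * np.exp(r_T)).ravel())          # Eq. 16
SR = np.linalg.inv(np.eye(n) - gamma * P_NN)          # SR under the default policy
SI = np.log(SR[:, g]) - np.log(p_G)                   # Eq. 18
print(np.allclose(v, SI))                             # True
```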
A.4 MATHEMATICAL RELATIONSHIP OF DSI AND WORD EMBEDDING
In this section, we discuss the relationship between SI and PMI (Levy & Goldberg, 2014) in detail. PMI is
$$\mathrm{PMI} = \log\left(\frac{P(\mathrm{word}_i, \mathrm{word}_j)}{P(\mathrm{word}_i)\, P(\mathrm{word}_j)}\right), \quad (19)$$
where $P(\mathrm{word}_i, \mathrm{word}_j)$ is the coincidence probability of two words (within a certain temporal window).
To relate PMI to SI, we regard words as states: $s = \mathrm{word}_i$, $s' = \mathrm{word}_j$. Furthermore, we consider a specific way to count the coincidence probability. In typical word embedding, a finite symmetric rectangular window is often used:
$$P(s, s') = \sum_{t=0}^{W} P(s_t = s', s_0 = s), \quad (20)$$
where $W$ is the window size. Here, we implicitly assume that the same state (word) is not repeated within the temporal window, to guarantee that $P(s, s')$ is a probability.
However, we may calculate the coincidence for $P(s, s')$ in other ways. Here we evaluate coincidence with an infinite asymmetric exponential kernel as in SR:
$$P(s, s') = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s). \quad (21)$$
We introduced the normalization factor $(1 - \gamma)$ to guarantee that $P(s, s')$ is less than one (since $(1 - \gamma) \sum_{t=0}^{\infty} \gamma^t = 1$). Then, PMI becomes
$$\mathrm{PMI} = \log\left(\frac{(1 - \gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s)}{P(s)\, P(s')}\right) \quad (22)$$
$$= \log(\mathrm{SR}(s, s')) - \log(P(s')) + \log(1 - \gamma) \quad (23)$$
$$= \mathrm{SI}(s, s') + \log(1 - \gamma). \quad (24)$$
If we perform dimension reduction, $\log(1 - \gamma)$ can be ignored because it is a constant. Therefore, we can interpret SI as a special case of PMI in our model.
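To make the correspondence concrete, PSI can be computed for a token sequence directly from the SR estimate of Appendix A.1. The sketch below reuses the `successor_representation` helper sketched above and assumes integer-coded tokens; the handling of never-co-occurring pairs is our choice:

```python
import numpy as np

def positive_successor_information(seq, n_states, gamma=0.9):
    # SI(s, s') = log SR(s, s') - log P(s'); PSI clips it at zero (Eqs. 2-3).
    SR = successor_representation(seq, n_states, gamma)
    counts = np.bincount(seq, minlength=n_states).astype(float)
    P = counts / counts.sum()                  # occurrence probabilities P(s')
    with np.errstate(divide="ignore"):         # log(0) -> -inf, clipped below
        SI = np.log(SR) - np.log(P)[None, :]
    return np.maximum(SI, 0.0)
```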
A.5 RELATIONSHIP BETWEEN MODEL COMPONENTS AND REPRESENTATIONS
To clarify the contribution of each model component to the results in this study, we performed a “lesion study” in which we removed some components of DSI and repeated the same evaluation procedure as in the main text. We summarize the results in Table 3. First, we tested representations obtained by singular value decomposition of the successor representation (SR-SVD), which was regarded as a model of grid cells in a previous study (Stachenfeld et al., 2017). The DSI model exceeded SR-SVD in all aspects examined in this study. Next, we tested the DSI model without decorrelation ($\beta_{\mathrm{cor}} = 0$) and the DSI model without non-negativity (no rectification of representation vectors). Neither modification impaired the performance of navigation and inference, showing that, as theoretically expected, vector-based computation relies on SI itself. In contrast, removing decorrelation and non-negativity significantly impaired the emergence of grid-like units and concept-specific units, respectively. Thus, decorrelative NMF is crucial for obtaining biologically plausible representations.
A.6 DETAILS OF EVALUATION OF GRID REPRESENTATIONS
In Section 4.2, we performed the gridness analysis following a previous experimental study (Sargolini et al., 2006). For each unit, we rotated the spatial autocorrelation map (Figure 2B, lower) and calculated correlations between the original and rotated maps. Gridness was defined as the difference between the lowest correlation at 60° and 120° and the highest correlation at 30°, 90°, and 150°. A unit was classified as a grid cell when its gridness exceeded zero.
In Figure 2C, we constructed a distribution of grid scales. Grid scales were determined as the median of the distances between the central peak and the six closest peaks (the vertices of the inner hexagon) in the spatial autocorrelation map. The kernel function for kernel density estimation was Gaussian with a standard deviation of 1.
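The gridness score can be sketched as follows. We assume SciPy is available and that `autocorr` is a 2-D spatial autocorrelation map; the rotation settings (`reshape=False`, bilinear interpolation) are our choice:

```python
import numpy as np
from scipy.ndimage import rotate

def gridness(autocorr):
    # Grid cells show 60-degree periodicity (Sargolini et al., 2006):
    # high correlation at 60/120 degrees, low at 30/90/150 degrees.
    def corr_at(angle):
        r = rotate(autocorr, angle, reshape=False, order=1)
        return np.corrcoef(autocorr.ravel(), r.ravel())[0, 1]
    return min(corr_at(60), corr_at(120)) - max(corr_at(30), corr_at(90), corr_at(150))
```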
A.7 PATH INTEGRATION BY DSI
We performed path integration based on DSI representations using movement-conditional recurrent weights. This strategy has been used in previous studies such as grid cell modeling (Gao et al., 2019) and action-conditional video prediction (Oh et al., 2015). This mechanism is also consistent with a conventional biological model of path integration in which head direction signals activate one of the attractor networks specialized for different directional shifts of grid patterns (McNaughton et al., 2006; Burak & Fiete, 2009).
We made an estimate of the next representation vector $\hat{\mathbf{x}}_{t+1}$ by linear transformation of the current representation vector $\mathbf{x}(s_t)$ as
$$\hat{\mathbf{x}}_{t+1} = M(a_t)\, \mathbf{x}(s_t), \quad (25)$$
where $a_t$ represents a movement (one of eight directional movements in this study) and $M(a_t)$ is a movement-conditional recurrent weight matrix. Here, $\mathbf{x}(s_t)$ was a DSI representation vector, and we optimized the matrix $M(a_t)$ by minimizing the prediction error $||\mathbf{x}(s_{t+1}) - M(a_t)\mathbf{x}(s_t)||_2^2$ by stochastic gradient descent during random walks on the state transition graph (20 simulation trials of 100,000 time steps). After optimization, we set an initial state $s_0$ and a sequence of movements $\{a_0, a_1, \ldots, a_{T-1}\}$, and performed path integration by the recursive estimation $\hat{\mathbf{x}}_{t+1} = M(a_t)\hat{\mathbf{x}}_t$. We determined a position at each time step by searching for the state representation vector with the minimum Euclidean distance to the estimated vector ($s_t = \arg\min_s ||\mathbf{x}(s) - \hat{\mathbf{x}}_t||_2$). As shown in Figure 6, this strategy gave accurate estimation of the spatial path from movement signals.
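A sketch of this procedure is given below, with one weight matrix per movement fitted by SGD on the prediction error; array shapes, the learning rate, and the epoch count are our assumptions:

```python
import numpy as np

def fit_action_matrices(X, traj, actions, n_actions, lr=0.01, epochs=20):
    # X: (n_states, D) DSI vectors; actions[t] moves traj[t] -> traj[t+1].
    D = X.shape[1]
    M = [np.eye(D) for _ in range(n_actions)]      # movement-conditional weights
    for _ in range(epochs):
        for t in range(len(traj) - 1):
            x, x_next = X[traj[t]], X[traj[t + 1]]
            err = x_next - M[actions[t]] @ x       # prediction error (Eq. 25)
            M[actions[t]] += lr * np.outer(err, x) # gradient step
    return M

def path_integrate(M, X, s0, action_seq):
    x_hat, path = X[s0].copy(), [s0]
    for a in action_seq:
        x_hat = M[a] @ x_hat                       # recursive estimation
        path.append(int(np.argmin(np.linalg.norm(X - x_hat, axis=1))))
    return path
```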
A.8 DETAILS OF VECTOR-BASED INFERENCE OF THE SPATIAL CONTEXT
In Section 4.4, we performed vector-based inference for spatial navigation in a novel context. Specifically, we define separate states in contexts A, B, and A+B as $s_i^A$, $s_i^B$, and $s_i^{A+B}$, where $i$ is a positional index indicating the same position in all contexts ($i = 1, 2, \cdots, 900$). We constructed representation vectors $\mathbf{x}(s_i^A)$, $\mathbf{w}(s_i^A)$, $\mathbf{x}(s_i^B)$, and $\mathbf{w}(s_i^B)$ through direct experiences, then we created $\mathbf{x}(s_i^{A+B})$ and $\mathbf{w}(s_i^{A+B})$ as
$$\mathbf{x}(s_i^{A+B}) = \mathbf{x}(s_i^A) + \mathbf{x}(s_i^B), \quad (26)$$
$$\mathbf{w}(s_i^{A+B}) = \mathbf{w}(s_i^A) + \mathbf{w}(s_i^B). \quad (27)$$
We performed spatial navigation in a given context using one of the three representations $\{\mathbf{x}(s_i^A), \mathbf{w}(s_i^A)\}$, $\{\mathbf{x}(s_i^B), \mathbf{w}(s_i^B)\}$, and $\{\mathbf{x}(s_i^{A+B}), \mathbf{w}(s_i^{A+B})\}$ for the corresponding positions, following the rule described in Section 4.3 (a sketch is given below). In Figure 7, we show the structures of the state transition graphs for the three contexts A, B, and A+B.
To learn representations in contexts A and B, we sampled sequences of $\{s_i^A\}_{i=1,\cdots,900}$ and $\{s_i^B\}_{i=1,\cdots,900}$ by random walk in contexts A and B. The procedure was basically the same as in Section 4, except that we increased the number of simulation trials from 500 to 1,000, and a state transition to the same position in the other context occurred every 5,000 time steps (a transition between $s_i^A$ and $s_i^B$). We added this transition to associate the same position across contexts. This means we assumed that the barrier configuration can change during experience; this temporal association may alternatively be substituted by the similarity of sensory inputs across contexts. From the sampled sequences, we calculated PSI for all combinations of $\{s_i^A\}_{i=1,\cdots,900}$ and $\{s_i^B\}_{i=1,\cdots,900}$, and calculated 100-dimensional DSI vectors for the 1,800 states by simultaneous compression of all states. The discount factor $\gamma$ was set to 0.999.
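The composition itself is a single vector addition; below is a sketch of the navigation rule of Section 4.3 applied with composed vectors. `neighbors` maps a state to its neighboring states, and all names are our assumptions:

```python
import numpy as np

def navigate(X, W, neighbors, s_init, s_goal, max_steps=200):
    # Greedy rule of Section 4.3: move to the neighbor with the largest
    # estimated value x(s_next) . w(s_goal).
    s, path = s_init, [s_init]
    while s != s_goal and len(path) <= max_steps:
        s = max(neighbors[s], key=lambda nxt: X[nxt] @ W[s_goal])
        path.append(s)
    return path

# Vector-based inference for the novel context A+B (Eqs. 26-27):
# X_AB, W_AB = X_A + X_B, W_A + W_B; then navigate(X_AB, W_AB, ...).
```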
A.9 VISUALIZATION OF SPATIAL STRUCTURES REPRESENTED BY DSI VECTORS
In Figure A.9, we visualized the metric spaces defined by the representation vectors for contexts A and B and by the composite vectors for the context A+B, using multidimensional scaling (MDS). This visualization clearly shows that the DSI vectors for A and B capture the structures of spatial contexts A and B, and that adding those vectors yields an appropriate metric space for the novel context A+B.
A.10 DETAILS OF PREPROCESSING OF TEXT DATA
In Section 5, we used text data taken from an English Wikipedia dump (enwiki-latest-pages-articles, 22-May-2020). We first generated text files from the raw data using wikiextractor (https://github.com/attardi/wikiextractor). We tokenized texts with the nltk Punkt sentence tokenizer, and randomly sampled 100,000 articles containing 1,000 tokens at minimum. We lowercased all characters and removed punctuation characters from the data. After that, we selected words that appeared more than 1,000 times in the data, and substituted all other rare words by the <unk> symbol. Finally, we obtained data that contains 124M tokens and 9376 words.
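A sketch of this preprocessing with nltk is given below; the frequency threshold follows the text, while the tokenization details and the regular expression are our assumptions:

```python
import re
from collections import Counter
from nltk.tokenize import sent_tokenize, word_tokenize

def preprocess(text, min_count=1000):
    # Sentence-tokenize (Punkt), lowercase, strip punctuation, replace rare words.
    tokens = [w.lower() for s in sent_tokenize(text) for w in word_tokenize(s)]
    tokens = [re.sub(r"[^\w]", "", w) for w in tokens]
    tokens = [w for w in tokens if w]
    counts = Counter(tokens)
    return [w if counts[w] > min_count else "<unk>" for w in tokens]
```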
A.11 DETAILS OF EVALUATION OF CONCEPTUAL SPECIFICITY
In Section 5.2, the conceptual specificity of each unit was evaluated using the WordNet database (Princeton University, 2010). In WordNet, a word belongs to several synsets (sets of cognitive synonyms), and the semantic similarity of two synsets can be evaluated from the shortest path length between them in the WordNet structure (we used the path similarity function in the nltk library). We defined the similarity of two words as the highest similarity among all combinations of synsets of those words. We calculated the mean similarity of all combinations of TOP-10 words (the ten words that most highly activated the unit; Figure 5A) that are available in WordNet. We evaluated only units which had at least five TOP-10 words available in WordNet. Furthermore, we randomly generated 1,000 pairs of words available in WordNet, and generated a null distribution of similarity between words. We defined the significance threshold of similarity as the 95th percentile of the null distribution, and a unit was classified as a significantly concept-specific unit if the mean similarity of its TOP-10 words exceeded the threshold. Furthermore, we quantitatively defined the conceptual specificity of each unit as
$$\frac{s_{\mathrm{unit}}}{s_{\mathrm{null}}} - 1, \quad (28)$$
where $s_{\mathrm{unit}}$ is the mean similarity of the TOP-10 words and $s_{\mathrm{null}}$ is the mean of the null distribution. This quantity becomes zero if the similarity between TOP-10 words does not differ from that of random pairs, and becomes positive if the TOP-10 words are semantically similar. This conceptual specificity was averaged over all evaluated units.
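A sketch of the WordNet-based similarity measure, assuming nltk with the WordNet corpus installed:

```python
from itertools import combinations
from nltk.corpus import wordnet as wn

def word_similarity(w1, w2):
    # Highest path similarity over all synset pairs of the two words.
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(sims, default=0.0)

def mean_top10_similarity(top_words):
    pairs = list(combinations(top_words, 2))
    return sum(word_similarity(a, b) for a, b in pairs) / len(pairs)
```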
A.12 EXAMPLE DSI REPRESENTATIONS FOR WORDS
In Figure 9, we show the TOP-10 words of DSI units without manual selection. We found that several non-significant units exhibit conceptual specificity according to manual inspection (for example, unit 4 may be named a “university cell”). This is probably because of the limitation of the knowledge covered by WordNet. Therefore, we suppose that the current evaluation method tends to underestimate the number of concept-specific units. However, the comparison across models was fair because we used the same procedure and criteria for all models.
A.13 EXAMPLES OF THE ANALOGICAL INFERENCE TASK
In Table 4, we show some examples of the analogical inference task in Mikolov’s dataset. There is a relationship “WORD1 is to WORD2 as WORD3 is to WORD4”. The expected relationship in the vector space is then WORD2 − WORD1 = WORD4 − WORD3. In this study, we performed inference of WORD4 by WORD3 + WORD2 − WORD1. We regarded an inference as correct if the actual vector of WORD4 had the largest cosine similarity to the inferred vector among all word representation vectors (except those for WORD1, WORD2, and WORD3). If the number of words is 10,000, the chance level of the correct answer rate is 0.01%. Therefore, the performance shown in this study (more than 50%) is far above the chance level.
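A sketch of this evaluation with cosine similarity, assuming a dict `vec` mapping words to their DSI vectors:

```python
import numpy as np

def analogy(vec, w1, w2, w3):
    # Infer WORD4 from WORD3 + WORD2 - WORD1, excluding the query words.
    q = vec[w3] + vec[w2] - vec[w1]
    best, best_sim = None, -np.inf
    for w, v in vec.items():
        if w in (w1, w2, w3):
            continue
        sim = (q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-12)
        if sim > best_sim:
            best, best_sim = w, sim
    return best  # e.g. analogy(vec, "man", "woman", "king") -> "queen"
```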
A.14 CLUSTERING OF SEMANTIC CATEGORIES IN DSI SPACE
Figure 10 shows the structure of DSI word representations visualized by MDS. We arbitrarily chose words based on the 10 semantic categories used in Reber et al. (2019). We used the same dissimilarity metric as Reber et al. (2019) (1 − Pearson’s correlation coefficient).
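The visualization can be sketched as follows, assuming scikit-learn and a matrix of word vectors:

```python
import numpy as np
from sklearn.manifold import MDS

def embed_words_2d(X):
    # Dissimilarity = 1 - Pearson correlation, as in Reber et al. (2019).
    D = 1.0 - np.corrcoef(X)              # X: (n_words, dim) DSI vectors
    return MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)
```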
A.15 INTUITIVE MECHANISM OF WORD REPRESENTATIONS BY DSI
In this section, we discuss how DSI vectors represent and compute words.
First, we analyzed the ratio of each element to the sum of all elements in DSI vectors. We found that even the largest element accounted for only 5% of the sum of all elements on average (Figure 11). This result shows that DSI vectors for words are non-sparse and distributed; thus, each word is represented by a combination of multiple conceptual units.
Next, for further clarification, we inspected the representations of an example set of words: France, Paris, Germany, and Berlin. We can see there are two analogical relationships (a country-capital and a French-German relationship). We identified the most active units (TOP-2) in the DSI vectors for those words, and listed the TOP-10 words for the identified units. As a result, we could see that “France” is represented by the combination of units that we could name a French cell and a country cell, whereas “Berlin” is represented by the combination of a German cell and a capital cell, and so on (Figure 12). This example also gives a simple interpretation of word similarity in the DSI vector space. If words are similar, they share a large number of active units, like the country cell shared by the representations of France and Germany. Thus, semantic similarity between words increases the cosine similarity between word vectors.
Furthermore, we also identified the largest elements (the largest absolute values) in the difference vectors between words, and found that they correspond to the semantic differences between words (Figure 12). Thus, we can regard analogical inference by DSI vectors as a recombination of conceptual units. For example, adding the Germany − Berlin vector to the Paris vector deactivates the capital cell and activates the country cell, which leads to the transformation of Paris into France.
Such a property of the vector space is the same as in conventional word embedding methods, but a unique feature of our model is that those analogical relationships are factorized into separate units. We speculate that the constraints of decorrelative NMF are sufficient conditions to align each semantic factor to an axis of the word vector space, and the mechanism is probably related to how disentangled representations emerge in visual feature learning models (Higgins et al., 2017; Carbonneau et al., 2020). | 1. What is the main contribution of the paper regarding disentangled successor information and its connection to brain processes?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the relevance of the paper to the ICLR community?
4. What are the limitations of the method, such as the assumption of discrete states?
5. How does the reviewer evaluate the evidence provided in the paper for the use of representations like DSI in the brain?
6. What are the implications of conflating conceptual processing with language processing?
7. Can the method be extended to continuous states, and how would this affect its application to conceptual spaces with non-grid-like structures?
8. How does the reviewer view the comparison between the proposed technique and other methods, such as word embeddings from next word prediction models like GPT-3?
9. What is the significance of the statement that spatial and conceptual processing can be theoretically unified into a single vector-based computational principle, and how well is it supported by the results in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a technique called disentangled successor information that extends successor representations (SR) in RL and shows that the representations learned by the proposed method resemble grid cells in the hippocampus. The authors apply their technique to spatial navigation and language modeling and show that 1) representations for spatial navigation look like grid cells 2) representations for words resemble concept cells in the brain, with each feature more or less corresponding to a single concept. They take these results to argue that perceptual and conceptual processing in the brain could be using a similar mechanism.
In more detail, given a set of discrete states, the proposed method first forms the successor matrix $S$ (where $S_{ij}$ is the discounted probability of observing state $s_j$ given state $s_i$). Then it calculates an information metric called positive successor information from this matrix. Finally, a dimensionality reduction technique based on non-negative matrix factorization is used to extract lower-dimensional state representations. This dimensionality reduction technique has an additional objective term to minimize the correlation between different state vectors. This helps to create a more disentangled state representation.
The authors show that this proposed technique is related to value functions in linear RL and also to word vectors obtained by skip-gram models.
Strengths And Weaknesses
Overall I think this is an interesting paper but I'm not sure if ICLR is the right venue for this work. This work mainly tries to establish a connection between their approach (disentangled successor information) and perceptual and conceptual processing in the brain. This is an interesting and valuable endeavor but it's unclear how relevant this would be to the ICLR community. To me it seems like a cogsci/neuro venue or even NeurIPS would be a better choice for this work.
In general, I think the paper is well written and easy to follow.
In terms of contributions, the additional step of using decorrelative NMF to get state representations seems novel. However, the connection of successor representations to value functions is well-known, and the connection between DSI and value functions seems to follow from this, so the connection is not quite novel. Perhaps the connection to word representations in skip-gram models is novel, but given that successor representations are essentially bi-gram models, this is not very surprising.
In terms of results, there is a nice evaluation of the technique on spatial navigation and on word processing, but these seem rather limited. Finding that the learned representations resemble grid cells is interesting but not very strong evidence, since many other techniques also learn representations with grid-like receptive fields.
Similarly for the word embedding results, I didn't really find them very strong. The authors show that features in their learned representations are on average more concept-specific than some alternative techniques. Since the brain also seems to have concept-specific cells, they take this as evidence for the brain using a similar mechanism. However, as far as I know it is still not very clear to what extent the concept representations in the brain are like concept cells. There seem to be many cells that do not cleanly respond to a specific concept, for example. Even if this was not an issue (and we knew the brain used concept cells), it is still not very strong evidence because other techniques also learn representations with similar properties. In fact, one could also directly use one-hot vectors as word representations and these would be 100% concept specific.
Overall, I think it'd be great to have more evidence supporting the use of representations like DSI in the brain, by perhaps testing out other properties of these representations (like word similarity etc.).
One major issue with the paper for me is that it conflates conceptual processing with language processing. Language (and words) are only a part of conceptual processing. In fact, one can argue that concepts are more primary than words and language. Then I don't think you can take results with words and use these to make statements about conceptual processing in general. In a couple of places, the authors argue that because their technique can be applied to both spatial navigation and words, it suggests that perceptual processing and conceptual processing use the same mechanisms. However, for the reason I mentioned, this does not follow. Humans certainly have a conceptual understanding of object shape as well. Can the proposed technique also capture key properties of this conceptual space?
Related to this point, not all conceptual spaces have a 2D (grid-like structure), so it is unclear why this method should work for such conceptual spaces as well.
Also, the hypothesis that perceptual and conceptual processing might be using the same mechanisms is not novel. There is a long line of research in CogSci that makes this argument, and it'd be good to mention these (for example Lakoff's work).
Other points:
One limitation of the method is that it assumes discrete states. Can the method be extended to continuous states? It might be good to mention some ideas along these lines.
In 4.4., in what cases would we expect addition to work? If it is only the case where the room is an exact superposition of two previous rooms, then this seems too limited.
How about comparing against embedding from next word prediction models like GPT-3? Would these show low concept specificity?
In 5.3, many methods can do the same vector based computation in conceptual space (even ones not based on successor representations).
In related work, the authors say "spatial and conceptual processing can be theoretically unified into a single vector-based computational principle". This is a very strong statement not supported by the results in the paper.
Clarity, Quality, Novelty And Reproducibility
Overall, I think there are some novel ideas in the paper (like de-correlative NMF), and the writing is quite clear. However, the results don't really support the claims made by the authors. |
ICLR | Title
Unified neural representation model for physical and conceptual spaces
Abstract
The spatial processing system of the brain uses grid-like neural representations (grid cells) for supporting vector-based navigation. Experiments also suggest that neural representations for concepts (concept cells) exist in the human brain, and conceptual inference relies on navigation in conceptual spaces. We propose a unified model called “disentangled successor information (DSI)” that explains neural representations for physical space and linguistic concepts. DSI generates grid-like representations in a 2-dimensional space that highly resemble those observed in the brain. Moreover, the same model creates concept-specific representations from linguistic inputs, corresponding to concept cells. Mathematically, DSI vectors approximate value functions for navigation and word vectors obtained by word embedding methods, thus enabling both spatial navigation and conceptual inference based on vector-based calculation. Our results suggest that representations for space and concepts can emerge from a shared mechanism in the human brain.
1 INTRODUCTION
In the brain, grid cells in the entorhinal cortex (EC) represent space through grid-like representations (Hafting et al., 2005; Doeller et al., 2010; Jacobs et al., 2013). This neural representation is often related to vector-based spatial navigation because grid cells provide a global metric over the space. Theoretically, an animal can estimate the direction to a goal when representations of the current position and the goal position are given (Fiete et al., 2008; Bush et al., 2015). Furthermore, self-position can be estimated by integrating self-motions when sensory information is not available (McNaughton et al., 2006). These functions are the basis of robust spatial navigation by animals.
There are not only spatial but also conceptual representations in EC. Neurons called “concept cells” have been found in the human medial temporal lobe, including EC (Quiroga, 2012; Reber et al., 2019). Concept cells respond to specific concepts, namely, stimuli related to a specific person, a famous place, or a specific category like “foods” and “clothes”. Furthermore, recent experiments also suggest that grid-like representations appear not only for physical space but also for conceptual spaces if there is a 2-dimensional structure (e.g. the lengths of a neck and legs, or the intensity of two odors), and those representations are the basis of vector-based conceptual inference (Bao et al., 2019; Constantinescu et al., 2016; Park et al., 2021). Thus, it is expected that there is a shared processing mechanism for physical and conceptual spaces in EC. The existence of a shared neural mechanism may also explain why humans use the sense of physical space (such as directionality) to communicate abstract concepts (conceptual metaphor (Lakoff & Johnson, 1980)). However, the principle behind such universal computation in the brain is still unclear.
In this paper, we propose a representation model which we call the disentangled successor information (DSI) model. DSI is an extension of successor representation (SR), which stems from the theory of reinforcement learning and has become one of the promising computational models of the hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Like eigenvectors of SR, DSI forms grid-like codes in a 2-D space, and those representations support vector-based spatial navigation because DSI approximates value functions for navigation in the framework of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021). Remarkably, when we apply DSI to text data by regarding a sequence of words as a sequence of states, DSI forms concept-specific representations like concept cells. Furthermore, we show a mathematical correspondence between DSI and word embedding models in natural language processing (NLP)
(Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014), thus we can perform intuitive vector-based conceptual inference as in those models. Our model reveals a new theoretical relationship between spatial and linguistic representation learning, and suggests a hypothesis that there is a shared computational principle behind grid-like and concept-specific representations in the hippocampal system.
2 CONTRIBUTIONS AND RELATED WORKS
We summarize the contributions of this work as follows. (1) We extended SR to successor information (SI), by which we theoretically connected reinforcement learning and word embedding, and thus spatial navigation and conceptual inference. (2) We found that dimension reduction with constraints for grid-like representations (decorrelative NMF) generates disentangled word vectors with concept-specific units, which has not been found previously. (3) Combining these results, we demonstrated that a computational model for grid cells can be extended to represent and compute linguistic concepts in an intuitive and biologically plausible manner, which has not been shown in previous studies.
Our model is an extension of successor representation (SR), which has recently been viewed as a plausible model of the hippocampus and EC (Dayan, 1993; Stachenfeld et al., 2017; Momennejad et al., 2017; Momennejad, 2020). Furthermore, default representation (DR), which is based on linear reinforcement learning theory, has also been proposed as a model of EC (Piray & Daw, 2021). We show that our model can extract linguistic concepts, which has not been shown for SR and DR. Furthermore, we demonstrate vector-based compositionality of words in our model, which expands the range of compositionality of EC representations (Piray & Daw, 2021) to semantic processing.
Our model produces biologically plausible grid-like representations in 2-D space, which support spatial navigation. Previous studies have revealed that non-negativity and orthogonality constraints are important to obtain realistic grid-like representations (Dordek et al., 2016; Sorscher et al., 2019). Furthermore, recurrent neural networks form grid-like representations through learning path integration, and those representations support efficient spatial navigation (Banino et al., 2018; Cueva & Wei, 2018; Gao et al., 2019). Some of those models have reproduced experimentally observed scaling ratios between grid cell modules (Banino et al., 2018; Sorscher et al., 2019). However, previous models have not been applied to learning of linguistic concepts or other complex conceptual spaces in real-world data. Whittington et al. (2020) proposed a unified model for spatial and non-spatial cognition. However, their model was applied only to simple graph structures, and conceptual specificity like that of our model was not observed.
Analogical inference by our model is the same function as in word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). However, a unique feature of DSI representations is that each dimension of the vectors corresponds to a specific concept, like concept cells in the human brain (Quiroga, 2012; Reber et al., 2019). Our model provides a biologically plausible interpretation of word embedding: each word is represented by a combination of disentangled conceptual units, inference is a recombination of those concepts, and such representations emerge through the same constraints as grid cells. It was recently shown that transformer-based models (Vaswani et al., 2017; Brown et al., 2020), which are currently state-of-the-art models in NLP, generate grid-like representations when applied to spatial learning (Whittington et al., 2022). Similarly to our model, this finding implies a relationship between spatial and linguistic processing in the brain. However, concept-specific representations have not been found in such models. Furthermore, the clear theoretical interpretation in this study depends on the analytical solution for skip-gram (Levy & Goldberg, 2014). Such an analytical solution is currently unknown for transformer-based models.
3 MODEL
3.1 DISENTANGLED SUCCESSOR INFORMATION
Let us assume $N_s$ discrete states exist in the environment. Successor representation (SR) between two states $s$ and $s'$ is defined as
$$\mathrm{SR}(s, s') = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t \delta(s_t, s') \,\middle|\, s_0 = s\right] = \sum_{t=0}^{\infty} \gamma^t P(s_t = s' | s_0 = s), \quad (1)$$
where $\delta(i, j)$ is Kronecker's delta and $\gamma$ is a discount factor. We describe how we calculate SR in this study in Appendix A.1. SR and its dimension-reduced representations have been viewed as models of the hippocampus and entorhinal cortex, respectively (Stachenfeld et al., 2017).
Based on SR, we define successor information (SI) and positive successor information (PSI) as
$$\mathrm{SI}(s, s') = \log(\mathrm{SR}(s, s')) - \log(P(s')), \quad (2)$$
$$\mathrm{PSI}(s, s') = \max\{\mathrm{SI}(s, s'), 0\}. \quad (3)$$
In this study, we regard this quantity as a hippocampal model instead of SR (Figure 1A).
Next, we introduce a novel dimension reduction method which we call decorrelative non-negative matrix factorization (decorrelative NMF). Decorrelative NMF can be regarded as a variant of NMF (Lee & Seung, 1999) with additional constraints of decorrelation. By applying decorrelative NMF to PSI, we obtain representation vectors called disentangled successor information (DSI), which we regard as a model of EC (Figure 1A). In decorrelative NMF, we obtain $D$-dimensional vectors $\mathbf{x}(s)$ and $\mathbf{w}(s)$ ($D < N_s$) by minimization of the following objective function
$$J = \frac{1}{2} \sum_{s,s'} \rho(s, s') (\mathrm{PSI}(s, s') - \mathbf{x}(s) \cdot \mathbf{w}(s'))^2 + \frac{1}{2} \beta_{\mathrm{cor}} \sum_{i \neq j} (\mathrm{Corr}(i, j))^2 + \frac{1}{2} \beta_{\mathrm{reg}} \sum_s (||\mathbf{x}(s)||^2 + ||\mathbf{w}(s)||^2), \quad (4)$$
subject to the non-negativity constraints $\forall i,\ x_i(s) \geq 0,\ w_i(s) \geq 0$. $\rho(s, s')$ is a weight for the square error
$$\rho(s, s') = \frac{1}{N_s V} \left( \frac{1}{M} \mathrm{PSI}(s, s') + \rho_{\min} \right), \quad (5)$$
where $M$ and $V$ are the mean and variance of PSI, respectively, and $\rho_{\min}$ is a small value to avoid zero weights. $\mathrm{Corr}(i, j)$ is the correlation between two dimensions of $\mathbf{x}(s)$
$$\mathrm{Corr}(i, j) = \frac{\sum_s \tilde{x}_i(s) \tilde{x}_j(s)}{\sqrt{\sum_s (\tilde{x}_i(s))^2 \sum_s (\tilde{x}_j(s))^2}}, \quad (6)$$
where $\tilde{x}_i(s) = x_i(s) - \frac{1}{N_s} \sum_s x_i(s)$. The first term of the objective function is weighted approximation error minimization, the second term works for decorrelation between dimensions, and the third term regularizes the representation vectors. Optimization was performed by Nesterov's accelerated gradient descent method (Nesterov, 1983) with rectification of $x_i(s)$ and $w_i(s)$ every iteration. We describe additional details in Appendix A.2.
3.2 RELATIONSHIPS WITH REINFORCEMENT LEARNING AND WORD EMBEDDING
We show a dual interpretation of our model. On the one hand, DSI approximates value estimation of linear reinforcement learning, thus supporting goal-directed decision making and navigation. On the other hand, the same representation approximates word embedding in NLP, thus supporting semantic computation.
First, our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation. Linear reinforcement learning assumes a default policy and imposes an additional penalty on deviation from the default policy; then we can obtain value functions explicitly by solving linear equations. Let us consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state $s_G$ arbitrarily chosen from the non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability $p_{NT}$. Furthermore, we assume that rewards at non-terminal states are uniformly negative and the reward at the terminal state is positive, so that the agent has to take a short path to the goal to maximize reward. In this setting, we can obtain the value functions $v^*(s)$ of linear reinforcement learning as
$$\lambda^{-1} v^*(s) = \log(\mathrm{SR}^d(s, s_G)) - \log P^d(s_G) = \mathrm{SI}^d(s, s_G) \approx \mathbf{x}(s) \cdot \mathbf{w}(s_G), \quad (7)$$
where $\mathrm{SR}^d(s, s_G)$ and $\mathrm{SI}^d(s, s_G)$ are SR and SI under the default policy, respectively. We describe the details of the derivation in Appendix A.3. Therefore, SI is proportional to value functions for spatial navigation, and inner products of DSI vectors approximate value functions. Based on this interpretation, we basically regard $\mathbf{x}(s)$ as a representation of each state, and $\mathbf{w}(s)$ represents a temporary goal.
Second, DSI is related to word embedding methods in NLP (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In linguistics, pointwise mutual information (PMI) and positive pointwise mutual information (PPMI) are used to measure the degree of coincidence between two words (Levy & Goldberg, 2014). They are defined as
$$\mathrm{PMI} = \log\left(\frac{P(\mathrm{word}_i, \mathrm{word}_j)}{P(\mathrm{word}_i)\, P(\mathrm{word}_j)}\right), \quad (8)$$
$$\mathrm{PPMI} = \max\{\mathrm{PMI}, 0\}, \quad (9)$$
where $P(\mathrm{word}_i, \mathrm{word}_j)$ is the coincidence probability of two words (within a certain temporal window). It has been proven that dimension reduction of PMI approximates the word embedding method skip-gram (Mikolov et al., 2013a;b), and similar performance is obtained using PPMI (Levy & Goldberg, 2014). GloVe (Pennington et al., 2014) is also based on this perspective. SI can be written as
$$\mathrm{SI}(s, s') = \log(\mathrm{SR}(s, s')) - \log(P(s')) = \log\left(\frac{\sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s)}{P(s)\, P(s')}\right). \quad (10)$$
In this formulation, we can see the mathematical similarity between PMI and SI by regarding words as states ($s = \mathrm{word}_i$, $s' = \mathrm{word}_j$), and thus the correspondence between PPMI and PSI. Because of this relationship, we can expect that DSI, which is obtained through dimension reduction of PSI, has similar properties to word embedding methods. The difference is how coincidence is counted: the coincidence in SI is evaluated with an asymmetric exponential kernel as in SR, whereas a symmetric rectangular temporal window is often used in typical word embedding (see Appendix A.4 for further detail).
3.3 DECORRELATIVE NMF RELATES TO GRID CELLS AND DISENTANGLEMENT
Constraints in decorrelative NMF (non-negativity, decorrelation (or orthogonality), and regularization) are important for the generation of grid cells, as shown in previous theoretical studies on grid cells (Dordek et al., 2016; Cueva & Wei, 2018; Banino et al., 2018; Gao et al., 2019; Sorscher et al., 2019). They are also biologically plausible because neural activity is basically non-negative and decorrelation is possible through lateral inhibition. On the other hand, non-negativity (Oja & Plumbley, 2004) and decorrelation (Hyvärinen & Oja, 2000) are also important for the extraction of independent components, and it is known that imposing independence in the latent space of deep generative models results in the emergence of disentangled representations for visual features (Higgins et al., 2017). Therefore, in word embedding, we expected that those constraints help the emergence of independent and disentangled units for linguistic concepts. The constraints in decorrelative NMF are indeed crucial for the results obtained in this study (Appendix A.5).
As disentangled visual representations explain single-cell activities in higher-order visual cortex (Higgins et al., 2021), we may similarly interpret conceptual representations in our model as concept cells in the human medial temporal lobe (Quiroga, 2012). Previous studies suggest that each concept cell respond to a specific concept, whereas population-level activity patterns represent abstract semantic structures (Reber et al., 2019). Such property is consistent with the factorized and distributed nature of disentangled representation vectors.
4 LEARNING REPRESENTATIONS OF PHYSICAL SPACES
In this section, we empirically show that the DSI model forms biologically plausible grid-like representations in a 2-D physical space, and that they support spatial navigation. These results also apply to conceptual spaces with a 2-D structure, depending on the definition of states.
4.1 LEARNING PROCEDURE
As an environment, we assumed a square room tiled with 30×30 discrete states (Figure 2A). In each simulation trial, an agent starts at one of those 900 states and transits to one of the eight surrounding states at each time step, except that transitions are limited at states along the boundary (the structure was not a torus). Transitions in all directions occur with equal probability. We performed 500 simulation trials and obtained a sequence of 100,000 time steps in each trial. We calculated occurrence probabilities ($P(s)$) and a successor representation matrix ($\mathrm{SR}(s, s')$) of the 900 states from those sequences, and calculated PSI and DSI (100-dimensional) as described in the Model section. The discount factor $\gamma$ was set to 0.99. We additionally tested spatial navigation in a structure with separated and interconnected rooms (see Figure 3C). In that case, we used the discount factor $\gamma = 0.999$.
4.2 EMERGENCE OF GRID-LIKE REPRESENTATIONS
Here we call each dimension of the DSI representation vectors $\mathbf{x}(s)$ a neural "unit", and we regard the value of each dimension at each state as a neural activity (or a neural representation). As shown in Figure 2B, many units exhibited grid-like activity patterns in the space. We performed a gridness analysis that has been used in animal experiments (Sargolini et al., 2006) and found that 51% of units were classified as grid cells. Similarly, 53% of units in $\mathbf{w}(s)$ were classified as grid cells.
Furthermore, we checked whether DSI representations in the physical space reproduce a property of biological grid cells. Actual grid cells in the rat brain exhibit multiple discrete spatial scales, and the ratio between grid scales of adjacent modules is $\sqrt{2}$ (Stensola et al., 2012). We constructed a distribution of the grid scales of DSI units by kernel density estimation, which revealed that multiple peaks of grid scales existed and the ratio between grid scales of adjacent peaks was $\sqrt{2}$ (Figure 2C). These results show that the DSI model constructs biologically plausible grid-like representations in the 2-D physical space. We describe details of the analysis methods in Appendix A.6.
4.3 NEAR-OPTIMAL SPATIAL NAVIGATION BY DSI VECTORS
As discussed in Section 3.2, the inner product of DSI representations approximates value functions for spatial navigation. Therefore, we tested whether DSI representations actually enable near-optimal navigation in the space.
We assume that a start location (state $s_{\mathrm{init}}$) and a goal location (state $s_G$) are randomly given in each trial such that the shortest path length is at least 10, and an agent has to navigate between them. To solve the task, we define a vector-based state transition rule. Suppose that the agent is at a state $s$, and the set of neighboring states of $s$ is $A(s)$. Given the goal representation vector $\mathbf{w}(s_G)$, the value function of a neighboring state $s_{\mathrm{next}} \in A(s)$ is estimated by $\mathbf{x}(s_{\mathrm{next}}) \cdot \mathbf{w}(s_G)$, and the agent transits to the state that has the maximum value. This state transition rule can be geometrically interpreted as the choice of the movement that has the closest angle to the goal vector in the representation space (Figure 3A). Alternatively, we can interpret that the agent estimates value functions by linear readout from grid-like DSI representations. Because of the approximation error, this rule did not always give optimal navigation (the shortest path from the start to the goal). However, the agent could take the shortest path to the goal in 93.9% of the 1,000 trials we tested (an example is shown in Figure 3B). Furthermore, 97.2% were near-optimal navigation in which the actual path length was shorter than 1.1 times the shortest path length. The same framework also worked in a relatively complex environment with separated rooms (Figure 3C). In this environment, the ratio of optimal and near-optimal navigation was 68% and 82.6%, respectively. We also confirmed that we can perform path integration based on DSI representations using movement-conditional recurrent weights (McNaughton et al., 2006; Burak & Fiete, 2009; Oh et al., 2015; Gao et al., 2019) (Appendix A.7). These results show that DSI representations can support spatial navigation, which corresponds to the contribution of biological grid cells to spatial navigation.
4.4 VECTOR-BASED INFERENCE OF SPATIAL CONTEXTS
We additionally found that we can perform vector-based inference for spatial navigation in a novel context. First, we constructed DSI representation vectors in spatial contexts A and B, each of which has a barrier (Figure 4A). Then, we created representation vectors for a novel context A+B with two barriers by simply adding the representation vectors for the familiar contexts A and B (Figure 4A). We tested vector-based navigation (described in Section 4.3) in the three spatial contexts A, B, and A+B, using one of the three representations for A, B, and A+B. Naturally, representation vectors for A and B gave the best performance in contexts A and B, respectively (Figure 4B). Notably, the composite representation vectors for A+B achieved the best performance in the context A+B (Figure 4B). This
result suggests that we can utilize vector-based composition of representations for a novel spatial context. We describe details of the simulation in Appendix A.8.
Additional analysis by multidimensional scaling (MDS) suggest that summing DSI vectors leads to composition of an appropriate metric space for the novel context (Appendix A.9). This is potentially useful for composing multiple constraints that change reachability between states in various tasks (such as control of robotic arms and playing computer games), like composition of tasks in soft-Q learning (Haarnoja et al., 2018; Makino).
5 LEARNING REPRESENTATIONS OF CONCEPTUAL SPACES
In this section, we show that the same DSI model can learn representations for a complex conceptual space from linguistic inputs, and those representations support vector-based conceptual inference.
5.1 LEARNING PROCEDURE
We used text data taken from English Wikipedia, which contains 124M tokens and 9376 words (see Appendix A.10 for the detail of preprocessing). To construct DSI representations, we regarded each word as a “state”, and considered the text data as a sequence of 9376 states (Ns = 9376). Then, we applied the exactly same learning procedure as in the experiment of physical spaces. We obtained 300-dimensional DSI representation vectors for each word. The discount factor γ was set to 0.9. The setting of other parameters was the same as the experiment of physical spaces.
5.2 EMERGENCE OF CONCEPT-SPECIFIC REPRESENTATIONS
As in the previous section, we regard each dimension of the representation vectors as a neural unit, and checked how various words activate those units. Specifically, we listed the ten words that elicited the highest activities in each unit (TOP-10 words). Consequently, we found that many units are activated by words related to specific concepts (Figure 5; other examples in Appendix A.12), which could be named "game cell" or "president cell", for example. We quantified this conceptual specificity through WordNet-based semantic similarity between words (Princeton University, 2010). We compared the mean similarity among TOP-10 words and a null distribution of similarity between random word pairs, by which we determined statistically significant concept-specific units and quantified the degree of conceptual specificity of each unit (see Appendix A.11 for details). DSI exhibited a larger number of significantly concept-specific units and higher average conceptual specificity than other well-established word embedding methods such as skip-gram and GloVe (Table 1) (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). We also analyzed the conceptual specificity of representations in the embedding layer of a pretrained BERT model (bert-base-uncased in Hugging Face transformers) (Devlin et al., 2018; Wolf et al., 2020), which was lower than DSI (Table 1). This result shows that our DSI model forms more concept-specific representations than other models.
Additional analyses revealed that word representation vectors are non-sparse and distributed (Appendix A.15). Therefore, each word is represented by the combination of concept-specific units shared by several related words. For example, "France" can be represented by the combination of units which we could name French cell and country cell (Appendix A.15).
[Table 1: columns are Method, Evaluated, Significant ratio, Specificity.]
5.3 VECTOR-BASED COMPUTATION IN THE CONCEPTUAL SPACE
Given that DSI and word embedding methods are mathematically similar (Section A.4), we expect that DSI vectors have similar properties to the representation vectors learned by those word embedding methods. We evaluated the performance of DSI vectors in two tasks that have been used to evaluate word embedding methods: word similarity and analogical inference (Mikolov et al., 2013a;b; Pennington et al., 2014; Levy & Goldberg, 2014). In the word similarity task, we calculated the cosine similarity between the representation vectors of word pairs, and evaluated the rank correlation between those cosine similarities and human word similarities (WS353 dataset (Agirre et al., 2009); 248/345 word pairs were used). In the analogical inference task, we performed calculations of vectors such as $\mathbf{x}(\mathrm{king}) - \mathbf{x}(\mathrm{man}) + \mathbf{x}(\mathrm{woman})$ and checked whether the resultant vector has the maximum cosine similarity with $\mathbf{x}(\mathrm{queen})$ (Mikolov's dataset (Mikolov et al., 2013a;b); 3157/19544 questions were used; examples in Appendix A.13). The results show that DSI vectors achieved comparable performance with other well-established word embedding methods (Table 2).
This result indicates that the similarity of DSI representation vectors corresponds to semantic similarity. This property is consistent with the experimental observation that population-level pattern similarity of concept cell activities represents semantic categories (Reber et al., 2019). By visualizing the structure of DSI representations with MDS, we can actually see clustering of words corresponding to the 10 semantic categories used in Reber et al. (2019) (Appendix A.14). Furthermore, conceptual inference is possible through arithmetic composition of DSI vectors. We additionally found that this inference is an intuitive recombination of concept-specific units in some cases. For example, the transformation from "Paris" to "France" corresponds to activation of a country cell and deactivation of a capital cell, which is possible by adding the difference of the Germany and Berlin vectors (Appendix A.15).
[Table 2: columns are Method, Similarity, Analogy.]
6 DISCUSSION
In this paper, we proposed a theoretically interpretable and biologically plausible neural representation model for physical and conceptual spaces. We demonstrated that our DSI model forms grid-like representations in the physical space and concept-specific representations in the linguistic space, which are assumed to correspond to neural representations in EC. Furthermore, we showed that SI is mathematically related to linear reinforcement learning and word embedding methods; thus, DSI representations support spatial navigation and conceptual inference. These results suggest that we can extend the spatial representation model of EC to learn and compute linguistic concepts, which apparently constitute a different computational domain from physical space.
In Section 5.2, we demonstrated concept-specific representations created from text data. To the best of our knowledge, such a property has not been reported for any word embedding method. However, we unexpectedly found that continuous-bag-of-words (CBOW) showed relatively high conceptual specificity. Although DSI is related to PMI, skip-gram, and GloVe, we have not found a relationship to CBOW. Further clarification of the necessary conditions for conceptual specificity is still an open problem.
Although DSI has a clear mathematical interpretation, how biological neural networks can learn DSI is still unclear. A possible solution is an extension of the skip-gram neural network with SR, non-negativity, and decorrelation. Because SI corresponds to PMI, which is the optimum of the skip-gram neural network, we can expect that such an extended skip-gram network learns DSI. Building such a model and relating it to the circuit mechanisms in the hippocampus and EC are left for future research.
Our model relates word embedding to conceptual representations in the brain. A previous study showed that skip-gram representations support high-performance decoding of semantic information from fMRI data (Nishida & Nishimoto, 2018). Another study revealed that hippocampal theta oscillation codes semantic distances between words measured in a word2vec subspace (Solomon et al., 2019). These experimental results support our hypothesis. However, recent studies have shown that representations in transformer-based models (Vaswani et al., 2017) such as GPT (Brown et al., 2020) achieve remarkable performance in linear fitting to neural recordings during linguistic processing (Goldstein et al., 2022; Schrimpf et al., 2021). A major difference between our DSI model and transformer-based models is that DSI representations are basically fixed (static embedding), whereas transformer-based models flexibly create context-dependent representations (dynamic embedding). Conceptual interpretation obviously depends on the context; thus, activities of concept cells are context-dependent (Bausch et al., 2021). Therefore, our DSI model should be extended to process context dependence, hopefully by combination with other models for learning context-dependent latent cognitive states (Uria et al., 2020; George et al., 2021; Whittington et al., 2020).
Another direction of future research is application to general conceptual spaces by learning DSI representations from low-level sensory inputs, like spatial learning from visual and auditory inputs in previous models (Banino et al., 2018; Taniguchi et al., 2018; Uria et al., 2020). This may be possible by learning discrete states through unsupervised clustering for deep networks (Caron et al., 2018). As for the human brain, infants probably form primitive spatial and conceptual representations from sensory signals, and linguistic inputs later enrich those representations. We speculate that real-world sensory data also contain information about the conceptual space, so that DSI can be extended to learn those structures. Such a model would clarify the role of the hippocampal system in the computation of general conceptual spaces.
A APPENDIX
A.1 CALCULATION OF SR
SR is a variant of value functions in reinforcement learning, thus we can use various methods such as temporal-difference (TD) learning for its construction. Throughout this study, we used a direct count method because we performed only offline processing of finite data. In a sequence of states $\{s_1, \ldots, s_t, \ldots, s_T\}$, we recursively calculated exponential traces of past states $z(s, t) = \sum_{\tau=0}^{t-1} \gamma^{\tau} \delta(s_{t-\tau}, s)$ as
$$z(s, t) = \gamma z(s, t-1) + \delta(s_t, s), \quad (11)$$
and calculated SR from state counts and coincidence counts as
$$\mathrm{SR}(s, s') = \frac{\sum_{t=1}^{T} z(s, t)\, \delta(s_t, s')}{\sum_{t=1}^{T} \delta(s_t, s)}. \quad (12)$$
A.2 DETAILS OF DECORRELATIVE NMF
In decorrelative NMF, we iteratively updated vectors x(s) and w(s) by Nesterov’s accelerated gradient descent method to minimize the objective function (Eq. 4), rectifying all elements every iteration. Gradients are
$$\frac{\partial J}{\partial x_k(s)} = -\sum_{s'} \rho(s, s')\,(\mathrm{PSI}(s, s') - \mathbf{x}(s) \cdot \mathbf{w}(s'))\, w_k(s') + \beta_{\mathrm{cor}} \sum_{j \neq k} \frac{\mathrm{Corr}(k, j)\, \tilde{x}_j(s)}{\sqrt{\sum_s (\tilde{x}_k(s))^2 \sum_s (\tilde{x}_j(s))^2}} + \beta_{\mathrm{reg}}\, x_k(s), \quad (13)$$
$$\frac{\partial J}{\partial w_k(s')} = -\sum_{s} \rho(s, s')\,(\mathrm{PSI}(s, s') - \mathbf{x}(s) \cdot \mathbf{w}(s'))\, x_k(s) + \beta_{\mathrm{reg}}\, w_k(s'). \quad (14)$$
We note that we regarded the mean and variance of $x_i(s)$ in the correlation ($\frac{1}{N_s} \sum_s x_i(s)$ and $\sum_s (\tilde{x}_i(s))^2$ in Eq. 6) as constants in the calculation of these gradients. Practically, this heuristic did not affect the performance of decorrelation.
Throughout this paper, the learning rate was 0.05 and the number of iterations was 10,000. Parameters were $\beta_{\mathrm{cor}} = 1$, $\beta_{\mathrm{reg}} = 0.001$, and $\rho_{\min} = 0.001$.
A.3 MATHEMATICAL RELATIONSHIP OF DSI AND REINFORCEMENT LEARNING
In this section, we show that our model approximates value functions of linear reinforcement learning (Todorov, 2006; 2009; Piray & Daw, 2021) in the setting of spatial navigation.
In linear reinforcement learning, an agent aims to maximize “gain” instead of reward. Assuming a default policy $\pi_d(s)$ (any policy is allowed; typically a random walk in the case of an exploration task), the gain function is defined as
$$g(s) = r(s) - \lambda\, \mathrm{KL}(\pi(s)\,\|\,\pi_d(s)), \quad (15)$$
where $r(s)$ is the expected reward at state $s$ and $\lambda\, \mathrm{KL}(\pi(s)\,\|\,\pi_d(s))$ is the cost imposed on the difference between the current policy $\pi(s)$ and the default policy $\pi_d(s)$ ($\lambda$ is a relative weight of the cost). Previous works have shown that the optimal policy and the corresponding value functions can then be determined explicitly by solving linear equations (Todorov, 2006; 2009; Piray & Daw, 2021). Here we consider an environment that consists of $N_N$ non-terminal states and $N_T$ terminal states. We define two transition probability matrices under the default policy: $P_{NT}$ is an $N_N \times N_T$ matrix for transitions from non-terminal states to terminal states, and $P_{NN}$ is an $N_N \times N_N$ matrix for transitions across non-terminal states. Furthermore, $\mathbf{r}_N$ and $\mathbf{r}_T$ are vectors of rewards at non-terminal and terminal states, respectively. In this condition, the vector of value functions under the optimal policy, $\mathbf{v}^* = (v^*(s_1), \ldots, v^*(s_{N_N}))$, is obtained as
$$\exp(\lambda^{-1} \mathbf{v}^*) = M P_{NT} \exp(\lambda^{-1} \mathbf{r}_T), \quad (16)$$
where $M = (\mathrm{diag}(\exp(-\lambda^{-1} \mathbf{r}_N)) - P_{NN})^{-1}$ is the DR (Piray & Daw, 2021). To relate $\mathbf{v}^*$ to SI, we consider a specific condition in which the environment consists of non-terminal states, and a virtual terminal state is attached to a goal state $s_G$ arbitrarily chosen from the non-terminal states (Figure 1B). When the agent gets to the goal, it transits to the terminal state with a probability $p_{NT}$. Furthermore, we assume that rewards at non-terminal states are uniformly negative and the reward at the terminal state is positive, so that the agent has to take a short path to the goal to maximize reward. Specifically, we assume all elements of $\mathbf{r}_N$ are $\lambda \log \gamma$, and $r_T = -\lambda(\log \gamma + \log p_{NT} + \log P^d(s_G))$, where $\gamma$ is an arbitrary value in the range $(0, 1)$ and $P^d(s_G)$ is the probability of visiting the state $s_G$ under the default policy. Then, we obtain
$$\exp(\lambda^{-1} \mathbf{v}^*) = \frac{1}{P^d(s_G)} (I - \gamma P_{NN})^{-1} \mathbf{e}(i_G), \quad (17)$$
where $\mathbf{e}(i_G) = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ ($i_G$ is the index of the goal state). Because $(I - \gamma P_{NN})^{-1}$ is equivalent to a successor representation matrix with a discount factor $\gamma$ (Dayan, 1993; Stachenfeld et al., 2017), we finally obtain
$$\lambda^{-1} v^*(s) = \log(\mathrm{SR}^d(s, s_G)) - \log P^d(s_G) = \mathrm{SI}^d(s, s_G) \approx \mathbf{x}(s) \cdot \mathbf{w}(s_G), \quad (18)$$
where $\mathrm{SR}^d(s, s_G)$ and $\mathrm{SI}^d(s, s_G)$ are SR and SI under the default policy, respectively. Thus, SI is proportional to the value functions for spatial navigation, and inner products of DSI vectors approximate value functions. Based on this interpretation, we basically regard $\mathbf{x}(s)$ as a representation of each state, while $\mathbf{w}(s)$ represents a temporary goal.
A.4 MATHEMATICAL RELATIONSHIP OF DSI AND WORD EMBEDDING
In this section, we discuss the relationship of SI and PMI (Levy & Goldberg, 2014) in detail. PMI is
\mathrm{PMI} = \log\left(\frac{P(\mathrm{word}_i, \mathrm{word}_j)}{P(\mathrm{word}_i)\,P(\mathrm{word}_j)}\right), \quad (19)
where P(\mathrm{word}_i, \mathrm{word}_j) is the coincidence probability of two words (within a certain temporal window).
To relate PMI to SI, we regard words as states: s = \mathrm{word}_i, s' = \mathrm{word}_j. Furthermore, we consider a specific way to count the coincidence probability. In typical word embedding, a finite symmetric rectangular window is often used:
P(s, s') = \sum_{t=0}^{W} P(s_t = s', s_0 = s), \quad (20)
where W is the window size. Here, we implicitly assume that the same state (word) is not repeated within the temporal window, to guarantee that P(s, s') is a probability.
However, we may calculate the coincidence P(s, s') in other ways. Here we evaluate coincidence with an infinite asymmetric exponential kernel, as in SR:
P(s, s') = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s). \quad (21)
We introduced the normalization factor (1-\gamma) to guarantee that P(s, s') is less than one (since (1-\gamma)\sum_{t=0}^{\infty} \gamma^t = 1). Then, PMI becomes
\mathrm{PMI} = \log\left(\frac{(1-\gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s', s_0 = s)}{P(s)\,P(s')}\right) \quad (22)

= \log(\mathrm{SR}(s, s')) - \log(P(s')) + \log(1-\gamma) \quad (23)

= \mathrm{SI}(s, s') + \log(1-\gamma). \quad (24)
If we perform dimension reduction, log(1 − γ) can be ignored because it is a constant. Therefore, we can interpret SI as a special case of PMI in our model.
A.5 RELATIONSHIP BETWEEN MODEL COMPONENTS AND REPRESENTATIONS
To clarify the contribution of each model component to the results in this study, we performed a "lesion study" in which we removed some components of DSI and repeated the evaluation procedures of the main text. We summarize the results in Table 3. First, we tested representations obtained by singular value decomposition of the successor representation (SR-SVD), which was regarded as a model of grid cells in a previous study (Stachenfeld et al., 2017). The DSI model exceeded SR-SVD in all aspects examined in this study. Next, we tested the DSI model without decorrelation (β_cor = 0) and the DSI model without non-negativity (no rectification of representation vectors). Neither modification impaired the performance of navigation and inference, showing the importance of using SI for vector-based computations, as theoretically expected. In contrast, removing decorrelation and non-negativity significantly impaired the emergence of grid-like units and concept-specific units, respectively. Thus, decorrelative NMF is crucial for obtaining biologically plausible representations.
A.6 DETAILS OF EVALUATION OF GRID REPRESENTATIONS
In Section 4.2, we performed the gridness analysis following a previous experimental study (Sargolini et al., 2006). For each unit, we rotated the spatial autocorrelation map (Figure 2B, lower) and calculated correlations between the original and rotated maps. Gridness was defined as the difference between the lowest correlation at 60° and 120° and the highest correlation at 30°, 90°, and 150°. A unit was classified as a grid cell when its gridness exceeded zero.
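A minimal sketch of this score follows (our own implementation; experimental analyses often additionally restrict the correlation to an annulus around the map center, which we omit here):

```python
import numpy as np
from scipy.ndimage import rotate

def gridness(autocorr):
    """Sargolini-style gridness on a 2-D spatial autocorrelation map."""
    def corr_at(angle):
        r = rotate(autocorr, angle, reshape=False, order=1)
        return np.corrcoef(autocorr.ravel(), r.ravel())[0, 1]
    # min correlation at grid angles minus max at off-grid angles
    return (min(corr_at(60), corr_at(120))
            - max(corr_at(30), corr_at(90), corr_at(150)))

# A unit is classified as a grid cell when gridness(autocorr) > 0.
```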
In Figure 2C, we constructed a distribution of grid scales. Grid scales were determined as the median of the distances between the central peak and the six closest peaks (the vertices of the inner hexagon) in the spatial autocorrelation map. The kernel function for kernel density estimation was Gaussian with a standard deviation of 1.
A.7 PATH INTEGRATION BY DSI
We performed path integration based on DSI representations using movement-conditional recurrent weights. This strategy has been used in previous studies such as grid cell modeling (Gao et al., 2019) and action-conditional video prediction (Oh et al., 2015). This mechanism is also consistent with a conventional biological model for path integration in which head direction signals activate one of attractor networks specialized for different directional shifts of grid patterns (McNaughton et al., 2006; Burak & Fiete, 2009).
We estimated the next representation vector \hat{\mathbf{x}}_{t+1} by a linear transformation of the current representation vector \mathbf{x}(s_t):

\hat{\mathbf{x}}_{t+1} = M(a_t)\,\mathbf{x}(s_t), \quad (25)
where a_t represents a movement (one of eight directional movements in this study) and M(a_t) is a movement-conditional recurrent weight matrix. Here, \mathbf{x}(s_t) was a DSI representation vector, and we optimized the matrix M(a_t) by minimizing the prediction error \|\mathbf{x}(s_{t+1}) - M(a_t)\mathbf{x}(s_t)\|_2^2 by stochastic gradient descent during random walks on the state transition graph (20 simulation trials of 100,000 time steps). After optimization, we set an initial state s_0 and a sequence of movements \{a_0, a_1, \ldots, a_{T-1}\}, and performed path integration by the recursive estimation \hat{\mathbf{x}}_{t+1} = M(a_t)\hat{\mathbf{x}}_t. We determined the position at each time step by searching for the state representation vector with minimum Euclidean distance to the estimated vector (s_t = \mathrm{argmin}_s \|\mathbf{x}(s) - \hat{\mathbf{x}}_t\|_2). As shown in Figure 6, this strategy gave accurate estimation of the spatial path from movement signals.
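The optimization and the recursive decoding can be sketched as follows (a minimal sketch with assumed shapes and names; X is the (n_states, D) matrix of DSI vectors):

```python
import numpy as np

D, n_actions, lr = 100, 8, 0.01
M = {a: np.eye(D) for a in range(n_actions)}   # movement-conditional matrices

def sgd_step(x_t, x_next, a):
    """One SGD step on ||x_next - M[a] x_t||^2 (gradient: -err x_t^T)."""
    err = x_next - M[a] @ x_t
    M[a] += lr * np.outer(err, x_t)

def path_integrate(X, s0, actions):
    """Recursive estimation (Eq. 25) with nearest-neighbor decoding."""
    x_hat, states = X[s0], []
    for a in actions:
        x_hat = M[a] @ x_hat
        states.append(int(np.argmin(np.linalg.norm(X - x_hat, axis=1))))
    return states
```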
A.8 DETAILS OF VECTOR-BASED INFERENCE OF THE SPATIAL CONTEXT
In Section 4.4, we performed vector-based inference for spatial navigation in a novel context. Specifically, we define separate states in contexts A, B, and A+B as s_i^A, s_i^B, and s_i^{A+B}, where i is a positional index indicating the same position in all contexts (i = 1, 2, \ldots, 900). We constructed representation vectors \mathbf{x}(s_i^A), \mathbf{w}(s_i^A), \mathbf{x}(s_i^B), and \mathbf{w}(s_i^B) through direct experiences, then we created \mathbf{x}(s_i^{A+B}) and \mathbf{w}(s_i^{A+B}) as
\mathbf{x}(s_i^{A+B}) = \mathbf{x}(s_i^A) + \mathbf{x}(s_i^B), \quad (26)

\mathbf{w}(s_i^{A+B}) = \mathbf{w}(s_i^A) + \mathbf{w}(s_i^B). \quad (27)
We performed spatial navigation in a given context using one of the three representations \{\mathbf{x}(s_i^A), \mathbf{w}(s_i^A)\}, \{\mathbf{x}(s_i^B), \mathbf{w}(s_i^B)\}, and \{\mathbf{x}(s_i^{A+B}), \mathbf{w}(s_i^{A+B})\} for corresponding positions, following the rule described in Section 4.3. Figure 7 shows the structures of the state transition graphs for the three contexts A, B, and A+B.
To learn representations in contexts A and B, we sampled sequences of \{s_i^A\}_{i=1,\ldots,900} and \{s_i^B\}_{i=1,\ldots,900} by random walks in contexts A and B. The procedure was basically the same as in Section 4, except that we increased the number of simulation trials from 500 to 1,000, and a state transition to the same position in the other context occurred every 5,000 time steps (a transition between s_i^A and s_i^B). We added this transition to associate the same position across contexts; this amounts to assuming that the barrier configuration can change during the experience, although this temporal association could be substituted by similarity of sensory inputs across contexts. From the sampled sequences, we calculated PSI for all combinations of \{s_i^A\}_{i=1,\ldots,900} and \{s_i^B\}_{i=1,\ldots,900}, and calculated 100-dimensional DSI vectors for the 1,800 states by simultaneous compression of all states. The discount factor \gamma was set to 0.999.
A.9 VISUALIZATION OF SPATIAL STRUCTURES REPRESENTED BY DSI VECTORS
In Figure A.9, we visualized metric spaces defined by representation vectors for contexts A and B, and composite vectors for the context A+B, by using multidimensional scaling (MDS). This visualization clearly shows that DSI vectors for A and B capture structures of spatial contexts A and B, and adding those vectors yields appropriate metric space for novel context A+B.
A.10 DETAILS OF PREPROCESSING OF TEXT DATA
In Section 5, we used text data taken from an English Wikipedia dump (enwiki-latest-pages-articles, 22-May-2020). We first generated text files from the raw data using wikiextractor (https://github.com/attardi/wikiextractor). We tokenized texts with the nltk Punkt sentence tokenizer, and randomly sampled 100,000 articles containing at least 1,000 tokens. We lowercased all characters and removed punctuation characters from the data. After that, we selected words that appeared more than 1,000 times in the data, and substituted all other rare words with the <unk> symbol. Finally, we obtained data containing 124M tokens and 9,376 words.
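A minimal sketch of this pipeline follows (the file name is our own assumption, and the exact tokenization details in the original preprocessing may differ):

```python
import re
from collections import Counter
import nltk  # requires nltk.download("punkt")

text = open("wiki_articles.txt", encoding="utf-8").read().lower()
text = re.sub(r"[^\w\s]", " ", text)             # remove punctuation characters
tokens = nltk.word_tokenize(text)
counts = Counter(tokens)
vocab = {w for w, c in counts.items() if c > 1000}
tokens = [w if w in vocab else "<unk>" for w in tokens]   # rare words -> <unk>
```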
A.11 DETAILS OF EVALUATION OF CONCEPTUAL SPECIFICITY
In Section 5.2, the conceptual specificity of each unit was evaluated using the WordNet database (Princeton University, 2010). In WordNet, a word belongs to several synsets (sets of cognitive synonyms), and the semantic similarity of two synsets can be evaluated from the shortest path length between them in the WordNet structure (we used the path similarity function in the nltk library). We defined the similarity of two words as the highest similarity among all combinations of synsets of those words. We calculated the mean similarity of all combinations of TOP-10 words (the ten words that most highly activated the unit; Figure 5A) that are available in WordNet. We evaluated only units that had at least five TOP-10 words available in WordNet. Furthermore, we randomly generated 1,000 pairs of words available in WordNet to obtain a null distribution of similarity between words. We defined the significance threshold as the 95th percentile of the null distribution, and a unit was classified as a significantly concept-specific unit if the mean similarity of its TOP-10 words exceeded the threshold. Furthermore, we quantitatively defined the conceptual specificity of each unit as
\frac{s_{\mathrm{unit}}}{s_{\mathrm{null}}} - 1, \quad (28)
where s_{\mathrm{unit}} is the mean similarity of the TOP-10 words and s_{\mathrm{null}} is the mean of the null distribution. This quantity is zero if the similarity between TOP-10 words does not differ from that of random pairs, and positive if the TOP-10 words are semantically similar. The conceptual specificity was averaged over all evaluated units.
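A minimal sketch of this evaluation (helper names are our own; path_similarity may return None for unrelated synsets, which we map to 0):

```python
from itertools import combinations
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def word_similarity(w1, w2):
    """Highest path similarity over all synset combinations of two words."""
    sims = [s1.path_similarity(s2) or 0.0
            for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(sims, default=0.0)

def mean_top10_similarity(top_words):
    """Mean pairwise similarity of a unit's TOP-10 words found in WordNet."""
    pairs = list(combinations(top_words, 2))
    return sum(word_similarity(a, b) for a, b in pairs) / len(pairs)

# Conceptual specificity of a unit (Eq. 28): s_unit / s_null - 1.
```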
A.12 EXAMPLE DSI REPRESENTATIONS FOR WORDS
In Figure 9, we show the TOP-10 words of DSI units without manual selection. We found that several non-significant units exhibited conceptual specificity upon manual inspection (for example, unit 4 might be named a university cell). This is probably because of the limited knowledge covered by WordNet. Therefore, we suppose that the current evaluation method tends to underestimate the number of concept-specific units. However, the comparison across models was fair because we used the same procedure and criteria for all models.
A.13 EXAMPLES OF THE ANALOGICAL INFERENCE TASK
In Table 4, we show some examples of analogical inference in Mikolov's dataset. Each example encodes a relationship "WORD1 is to WORD2 as WORD3 is to WORD4"; the expected relationship in the vector space is then WORD2 − WORD1 = WORD4 − WORD3. In this study, we inferred WORD4 as WORD3 + WORD2 − WORD1. We regarded an inference as correct if the actual vector of WORD4 had the largest cosine similarity to the inferred vector among all word representation vectors (except those for WORD1, WORD2, and WORD3). With 10,000 words, the chance level of the correct-answer rate is 0.01%. Therefore, the performance shown in this study (more than 50%) is far above chance.
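A minimal sketch of this inference rule (assumed inputs: an embedding matrix and an insertion-ordered word-to-index mapping):

```python
import numpy as np

def infer_word4(E, vocab, w1, w2, w3):
    """E: (n_words, D) embeddings; vocab: word -> row index (insertion-ordered)."""
    q = E[vocab[w3]] + E[vocab[w2]] - E[vocab[w1]]
    sims = (E @ q) / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-12)
    for w in (w1, w2, w3):
        sims[vocab[w]] = -np.inf              # exclude the three query words
    inv = list(vocab)                         # index -> word
    return inv[int(np.argmax(sims))]

# e.g. infer_word4(E, vocab, "paris", "france", "berlin") should yield "germany".
```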
A.14 CLUSTERING OF SEMANTIC CATEGORIES IN DSI SPACE
Figure 10 shows the structure of DSI word representations visualized by MDS. We arbitrarily chose words based on the 10 semantic categories used in Reber et al. (2019), and used the same dissimilarity metric as Reber et al. (2019) (1 − Pearson's correlation coefficient).
A.15 INTUITIVE MECHANISM OF WORD REPRESENTATIONS BY DSI
In this section, we discuss how DSI vectors represent and compute words.
First, we analyzed the ratio of each element to the sum of all elements in DSI vectors. We found that even the largest element accounted for only about 5% of the sum of all elements on average (Figure 11). This result shows that DSI vectors for words are non-sparse and distributed; thus each word is represented by a combination of multiple conceptual units.
Next, for further clarification, we inspected the representations of an example set of words: France, Paris, Germany, and Berlin. Here there are two analogical relationships (country-capital and French-German). We identified the most active units (TOP-2) in the DSI vectors for those words, and listed the TOP-10 words for the identified units. As a result, we could see that "France" is represented by the combination of units that we could name a French cell and a country cell, whereas "Berlin" is represented by the combination of a German cell and a capital cell, and so on (Figure 12). This example also gives a simple interpretation of word similarity in the DSI vector space: if words are similar, they share a large number of active units, like the country cell shared by the representations of France and Germany. Thus, semantic similarity between words increases the cosine similarity between word vectors.
Furthermore, we also identified the largest elements (the largest absolute values) in the difference vectors between words, and found that they correspond to semantic difference between words (Figure 12). Thus, we can regard analogical inference by DSI vectors as recombination of conceptual units. For example, adding Germany-Berlin vector to Paris vector deactivate capital cell and activate country cell, which leads to the transformation of Paris into France.
Such property of the vector space is same as conventional word embedding methods, but unique feature of our model is that those analogical relationships are factorized into separated units. We speculate that constraints of decorrelative NMF are sufficient conditions to align each semantic factors to each axis of the word vector space, and the mechanism is probably related to how disentangled representations emerge in visual feature learning model (Higgins et al., 2017; Carbonneau et al., 2020). | 1. What is the main contribution of the paper regarding hippocampal and entorhinal representation?
2. What are the strengths and weaknesses of the proposed method in handling physical and conceptual space?
3. Do you have any questions regarding the emergence of grid-like representations and the role of w(s)?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a model of hippocampal / entorhinal representation that handles both physical and conceptual space with a common mathematical formulation, drawing on successor representations from RL, and building on dimensionality reduction accounts of grid cells such as the work of Dordek et al. The authors propose a novel dimensionality reduction technique that generates a disentangled pair of vectors. In the case of physical space, the authors show that grid cell-like representations emerge with their method. The authors show that their representations support spatial navigation, where one of the two vectors represents the starting position and the other represents the goal position. The most striking finding of the paper is that the very same method can be used in a language setting to generate word embeddings. The resulting word embeddings exhibit desirable properties familiar from the literature such as conceptual clustering and the ability to support analogical inference through vector algebra.
Strengths And Weaknesses
Strengths and weaknesses
Strengths: The paper suggests a profound connection between two superficially different domains of representation - physical and conceptual space. As well as shedding light on the neuroscientific question of how the hippocampal formation functions, the authors’ method potentially provides a powerful method for use in ML. The method is well motivated, both scientifically and mathematically.
Weaknesses: I don’t see any important weaknesses. But I have some questions.
Q1. Section 4.2: Emergence of grid-like representations. You present results for x(s), but I was wondering whether grid-like representations emerge in w(s) too? In the objective function (Eq.6), x(s) and w(s) are treated almost symmetrically, but the correlation term only applies to x(s) not to w(s) (Eq.8). Is this constraint responsible for the grid-like representations?
Q2. Is there an intuitive meaning to w(s) in the word embedding case? It seems that all the results presented for conceptual space use x(s) only. (Is that correct?)
Q3. I’m a little unclear on how the optimisation process works. If x and w are functions that map state vectors to vectors with reduced dimensionality then how are they represented. My reading is they are represented in tabular fashion, i.e. x(s) and w(s’) are represented independently for all s and s’. Is that right? I assume x and s could be approximated by a neural net, but that isn’t the method used here. Is that correct?
Clarity, Quality, Novelty And Reproducibility
The paper is very clearly written, and novel. I'm fairly confident I could reproduce the results (but see my Q3 above). |
ICLR | Title
Optimized Gated Deep Learning Architectures for Sensor Fusion
Abstract
Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control. Deep neural networks have been adopted for sensor fusion in a body of recent studies. Among these, the so-called netgated architecture was proposed, which has demonstrated improved performance over conventional convolutional neural networks (CNN). In this paper, we address several limitations of the baseline netgated architecture by proposing two further optimized architectures: a coarser-grained gated architecture employing (feature) group-level fusion weights and a two-stage gated architecture leveraging both group-level and feature-level fusion weights. Using driving mode prediction and human activity recognition datasets, we demonstrate the significant performance improvements brought by the proposed gated architectures and also their robustness in the presence of sensor noise and failures.
1 INTRODUCTION
Sensor fusion is an essential technology to autonomous systems such as self-driving cars and mobile robots. In advanced driver-assistance systems (ADAS), many sensors such as cameras, ultrasonic sensors, and LiDARs are utilized for enhanced safety and driving experience. Sensor fusion based vehicle and robot control technologies have been explored (Vargas-Meléndez et al., 2016; Jain et al., 2016; Garcia et al., 2017; Bohez et al., 2017; Patel et al., 2017). In addition, devices like smartphones and smartwatches typically integrate a number of sensors, making these devices a suitable platform for running sensor fusion applications such as activity recognition. Several sensor fusion techniques for activity recognition have been proposed (Yurtman & Barshan, 2017; Dehzangi et al., 2017; Zhao & Zhou, 2017; Gravina et al., 2017).
More specifically, in Bohez et al. (2017), a deep reinforcement learning based sensor fusion algorithm is discussed for robot control. A Kuka YouBot is used with multiple LiDAR sensors for simulations. In terms of sensor fusion, early fusion which concatenates all inputs as feature planes is compared with three other fusion techniques: concatenating convolution layer outputs, reducing the concatenated features with a 1x1 convolution, and accumulating convolution outputs. However, sensor noise and failures are not considered in this work. In addition, due to the fact that only the same type of sensory inputs is used in the experiments, the performance of sensor fusion based on different kinds of sensory inputs is unclear.
Among sensor fusion techniques employed in automobiles, Vargas-Meléndez et al. (2016) exploit neural networks with a Kalman filter for vehicle roll angle estimation, and show the advantage of using an inertial measurement unit (IMU) without additional suspension deflection sensors. Jain et al. (2016) consider a sensor-rich platform with a learning algorithm for maneuver prediction. Long short-term memory (LSTM) networks, a type of recurrent neural network (RNN), are used with sensory inputs from cameras, GPS, and speedometers. Garcia et al. (2017) propose joint probabilistic data fusion for road environments. However, neither a multiplicity of sensory inputs nor sensor noise and failures are considered. The adopted architecture is simplistic, where input data are only fused in one layer.
In the field of wearable devices, Yurtman & Barshan (2017) utilize early fusion, which concatenates sensory inputs. With this simple fusion approach, classical supervised learning methods such as Bayesian classifiers, k-nearest-neighbors, support vector machines, and artificial neural networks
are compared. Dehzangi et al. (2017) use deep convolutional neural networks with IMU data. Zhao & Zhou (2017) use a CNN with angle embedded gate dynamic images, which are pre-processed inputs for gait recognition. Gravina et al. (2017) summarize three fusion methods, namely, data, feature, and decision level fusion. However, effective sensor fusion network architectures for coping with sensor failures are not deeply investigated.
In terms of sensor fusion architectures, Patel et al. (2017) propose a so-called netgated architecture in which the information flow in a given convolutional neural network (CNN) is gated by fusion weights extracted from camera and LiDAR inputs. These fusion weights are used for computing a weighted sum of the sensory inputs. The weighted sum passes through fully connected layers to create a steering command. The gated network (netgated) is shown to be robust to sensor failures compared to basic CNNs. However, a deep understanding of the relationships between sensory inputs, fusion weights, network architecture, and the resulting performance is not provided.
The main objective of this paper is to propose optimized gated architectures that address three limitations of the baseline netgated architecture of Patel et al. (2017) and to investigate how different fusion architectures operate on clean sensory data and in the presence of sensor noise and failures. Our main contributions are:
• Propose a new coarser-grained gated architecture which learns robustly a set of fusion weights at the (feature) group level;
• Further propose a two-stage gating architecture which exploits both the feature-level and group-level fusion weights, leading to further performance improvements.
• Analyze the characteristics of the proposed architectures and show how they may address the limitations of the netgated architecture in terms of inconsistency of fusion weights, over-fitting, and lack of diverse fusion mechanisms.
By utilizing driving mode prediction and human activity recognition datasets, we demonstrate the significant performance improvements brought by the proposed architectures over the conventional CNN and netgated architectures under various settings, including cases where random sensor noise and failures are present. Empirical evidence is also analyzed to help shed light on the underlying causes that may be responsible for the observed improvements of the two proposed architectures.
2 THE BASELINE NETGATED ARCHITECTURE AND ITS LIMITATIONS
2.1 THE NETGATED ARCHITECTURE
The netgated architecture proposed in Patel et al. (2017) offers a promising approach for sensor fusion. This architecture was proposed in the context of unmanned ground vehicle (UGV) autonomous driving with convolutional neural networks and two sensors. A more general version of this architecture (with five sensors/features) is depicted in Fig. 1. In Patel et al. (2017), data from two sensory inputs, i.e. camera and LiDAR, are processed individually through convolutional (conv) layers, pooling layers, and fully connected layers. The outputs from the fully connected (FC) layers (e.g. "FC-f1" to "FC-f5" in the first dashed box in Fig. 1) are concatenated and then fused by another FC layer (e.g. "FC-con" in Fig. 1), where two feature-level fusion weights are created. Feature-level fusion weights were originally referred to as scalars in Patel et al. (2017). Note that each fusion weight is a scalar value and that in Fig. 1 five feature-level fusion weights are extracted, which are the outputs of the "FC-con" layer. These fusion weights are multiplied with the corresponding outputs from the feature-level FC layers, i.e. the first dashed box, which is duplicated for better illustration of the data flow in Fig. 1. Finally, these weighted feature outputs are fused by the last FC layer (i.e. "FC-out"), which produces the final prediction decision.
The netgated architecture is interesting in the sense that the extracted feature-level weights may conceptually be thought of as a "gating" variable for each feature input, and that a sensory input (feature) may be shut off from the network by a zero-valued fusion weight when it is corrupted by noise or sensor failures. As such, this architecture may be promising in providing robust sensor fusion capability.
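For concreteness, the forward pass can be sketched as follows. This is a minimal tf.keras sketch under our own assumptions about layer sizes; the paper does not specify how the fusion weights are normalized, so we assume a softmax over the "FC-con" outputs.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_netgated(n_features=5, window=40, n_classes=3):
    inputs = [layers.Input(shape=(window, 1)) for _ in range(n_features)]
    feats = []
    for x in inputs:                                   # per-feature conv/pool/FC
        h = layers.Conv1D(16, 3, activation="relu")(x)
        h = layers.MaxPooling1D(2)(h)
        h = layers.Flatten()(h)
        feats.append(layers.Dense(32, activation="relu")(h))       # "FC-f_i"
    con = layers.Concatenate()(feats)
    fw = layers.Dense(n_features, activation="softmax")(con)       # "FC-con"
    stack = layers.Lambda(lambda t: tf.stack(t, axis=1))(feats)    # (B, N, 32)
    gated = layers.Lambda(lambda t: t[0] * t[1][:, :, None])([stack, fw])
    out = layers.Dense(n_classes, activation="softmax")(
        layers.Flatten()(gated))                                   # "FC-out"
    return tf.keras.Model(inputs, out)
```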
2.2 LIMITATIONS OF THE BASIC NETGATED ARCHITECTURE
The netgated architecture offers an appealing end-to-end deep learning solution. Nevertheless, partially due to its end-to-end black-box nature, this architecture has several limitations, as discussed below.
Inconsistency of Fusion Weights. First, consider the situation in which there are N input features f1, f2, · · · , fN with corresponding feature-level fusion weights fw1, fw2, · · · , fwN. As in Fig. 1, the feature-level fusion weights are produced by the "FC-con" layer based on fused information from all inputs. As a result, an extracted fusion weight might not fully correspond to its feature due to the information sharing among all features. As we have observed in our experimental studies, there exist cases where the feature with the largest extracted fusion weight does not represent the most critical feature for the learning task. While the ranking of the feature-level weights may reflect the relative importance of the corresponding features to a certain degree, the association between the two is not always consistent. In this paper, we refer to this phenomenon as inconsistency of fusion weights. It can be well expected that inconsistency of fusion weights may adversely affect the overall prediction accuracy, particularly when the fusion weights for certain noisy or corrupted features are not robustly learned by the network, resulting in misleadingly large fusion weight values.
Potential Over-fitting. Furthermore, for applications where many features need to be fused, using the same number of fusion weight values introduces many additional parameters that must be learned properly during training, making the model more prone to over-fitting. This situation is further exacerbated by the potential occurrence of inconsistency of fusion weights.
Lack of Additional Fusion Mechanisms. Finally, in the architecture of Fig. 1, apart from the learning of fusion weights, fusion of raw input features is done in a simplistic manner, i.e. by the last fully connected layer “FC-out”. Nevertheless, there exist more powerful raw input fusion mechanisms which could potentially lead to additional performance improvements.
We address the above limitations of the baseline netgated architecture by proposing two extensions: a coarser-grained architecture and a hierarchical two-stage architecture, referred to as the Feature-Group Gated Fusion Architecture (FG-GFA) and the Two-Stage Gated Fusion Architecture (2S-GFA), respectively, described in the following sections.
3 FEATURE-GROUP GATED FUSION ARCHITECTURE (FG-GFA)
To address the aforementioned limitations of the baseline netgated architecture, we first explore a coarser-grained architecture, namely, the Feature-Group Gated Fusion Architecture (FG-GFA), shown in Fig. 2, where for illustration purposes two feature groups are shown, with the first group having three features and the second group two features. In general, a given set of N input features f1, f2, · · · , fN may be partitioned into, say, M feature groups FG1, FG2, · · · , FGM. As one specific example of this architecture, all features in a feature group are concatenated first, and then passed onto a convolution layer and a pooling layer. After going through the corresponding FC layer ("FC-g1" or "FC-g2" in Fig. 2), the processed information from all groups is concatenated and then passed onto an FC layer ("FC-con" in Fig. 2) whose outputs are split into M group-level fusion weights. The fused information of each group, e.g. "FC-g1" or "FC-g2" in Fig. 2, is multiplied by the corresponding group-level fusion weight ("FC-g1" and "FC-g2" are again duplicated in Fig. 2 to better illustrate the information flow). All weighted group-level information is combined and then processed by the final FC layer ("FC-out" in Fig. 2) to produce the prediction decision. The configuration and the number of layers at each key step of FG-GFA may be chosen differently from the specific example of Fig. 2.
We now comment on the key differences between the FG-GFA architecture and the baseline netgated architecture. First of all, in addition to the final fusion operation of "FC-out" in Fig. 2, we perform additional early fusion of sensory inputs within each group. The outputs of such within-group fusions are combined to produce a smaller number of group-level fusion weights. Furthermore, the extracted group-level weights are used to multiply the corresponding fused group feature information, not the individual feature information. These characteristics of FG-GFA introduce different types of fusion mechanisms into the network. Second, since fusion weights are now extracted only at the group level, fewer weights need to be learned compared with the baseline architecture. The fact that there are fewer tuning parameters and that early fusion takes place within each group might reduce the likelihood of the training process getting stuck in local minima. As a result, it may be possible to mitigate the issues of inconsistency of fusion weights and potential over-fitting. As will be demonstrated in the experimental study, FG-GFA leads to significantly more robust learning of fusion weights at the group level, i.e. the existence of noisy or corrupted features in a group can be much more reliably reflected in a corresponding reduction of the group-level fusion weight. As a result, group-level fusion weights become a more reliable indicator of the quality of the sensory inputs in each group. Accordingly, we have empirically observed improved performance brought by FG-GFA, as demonstrated later.
4 THE PROPOSED TWO-STAGE GATED FUSION ARCHITECTURE (2S-GFA)
In this hierarchical fusion architecture, we combine the baseline netgated architecture, which learns the feature-level fusion weights, and the proposed feature-group gated fusion architecture (FG-GFA), which extracts group-level fusion weights, into two stages. The 2S-GFA architecture is illustrated in Fig. 3, where the first three features are in group 1 and the remaining two features are in group 2.
The upper portion of the network extracts five feature-level fusion weights by splitting the outputs of the FC layer "FC-con", in the same way as in the baseline netgated architecture. The smaller sub-network at the bottom of Fig. 3 reuses the outputs from the first stage of conv layers at the top of the figure, which pre-process all sensory inputs individually. It then concatenates the pre-processed feature information within each group, and produces two group-level fusion weights by splitting the outputs of the FC layer "FC-con-g", shown by the red and yellow squares for the two groups, respectively. For each feature input, the product of its feature-level fusion weight and its group-level fusion weight defines its final feature weight, which is used to multiply the processed feature information, e.g. "FC-f1" to "FC-f5" in Fig. 3. Then, all weighted feature information is fused together by the FC layer "FC-out", which produces the final decision.
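The weighting step itself is simple; a minimal NumPy sketch follows (names are our own; feats holds the per-feature embeddings, fw/gw the feature- and group-level weights, with features 1-3 in group 1 and features 4-5 in group 2 as in Fig. 3):

```python
import numpy as np

groups = np.array([0, 0, 0, 1, 1])            # feature -> group assignment

def two_stage_gate(feats, fw, gw):
    """feats: (B, N, D); fw: (B, N); gw: (B, M). Returns gated features."""
    final_w = fw * gw[:, groups]               # product of the two weight levels
    return feats * final_w[:, :, None]         # gate each feature embedding
```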
Since 2S-GFA integrates the essential ingredients of the baseline netgated architecture and the group-based FG-GFA architecture, it improves over both architectures. Note again that the final fusion weight employed for each feature is the product of the feature-level weight and the corresponding group-level weight. As a result, the final fusion weight combines the key information extracted from feature-based and group-based processing. Each group-level fusion weight can be reliably learned as in the FG-GFA architecture; as a result, the final fusion weight can more reliably reflect the importance of the corresponding feature with respect to the learning task at hand, and serves as an effective gating mechanism for the feature. For example, as we have observed in our experimental study, the feature-level fusion weight for a noisy or corrupted sensory input may not fully reflect the degraded importance of that feature, as in the case of the baseline architecture. The more reliable group-level fusion weight, however, can block this feature, i.e. by making the final feature weight (the product of the feature-level and group-level weights) small. This property mitigates the issue of inconsistency of fusion weights of the baseline architecture.
On the other hand, compared with the FG-GFA architecture, 2S-GFA further leverages the information revealed by the feature-level fusion weights. Therefore, each sensory input can be gated at a finer granularity based on its feature-level fusion weight. As such, it may be expected that 2S-GFA represents an optimized middle ground between the baseline netgated architecture and the coarser-grained FG-GFA architecture, and that it can learn the structure of the underlying data more effectively, as we demonstrate experimentally.
5 EXPERIMENTAL SETTINGS
To validate the proposed FG-GFA and 2S-GFA architectures and compare them with the conventional non-gated and baseline netgated architectures, we consider two applications: driving mode prediction and human activity recognition on smartphones.
5.1 DATASETS, SETUPS FOR DATASETS, AND TOOL FLOW
Dataset for driving mode prediction. We consider three driving modes: idle, eco, and normal. The idle mode corresponds to the situation in which the vehicle's engine is on but the vehicle is not in motion. The vehicle is in the eco mode when the car is being driven for fuel efficiency. All other situations are labeled as the normal mode. The target is to predict the vehicle's driving mode in the next time period given all the sensory inputs available so far. We treat this application as a time-series prediction problem.
We have driven a 2014 Nissan Sentra for two months between two GPS coordinates to collect this driving dataset. The RPM and speed data are extracted from the on-board diagnostics (OBD2) in the vehicle. We use the y-axis reading of a gyroscope (GYRO Y) for measuring lateral acceleration, and the difference between the GPS headings in time (D HEADING) for the steering angle of the vehicle. All sensor data used in the driving dataset are collected from a Freematics ONE+ dongle (Huang, 2017). Due to the different extraction periods of the sensors in the Freematics ONE+ dongle, linear interpolation in time is used for data pre-processing. In total, five types of sensory data are used for training the neural networks: RPM, SPEED, acceleration and deceleration (D SPEED), the y-axis of the gyroscope (GYRO Y), and the difference of GPS headings in time (D HEADING). These five features are sampled every 250 ms. To predict the driving mode of the vehicle 5 seconds in the future, we use ten seconds of data (40 points) from each feature, normalize the collected data feature-wise, and concatenate them to form a feature vector window. The feature vector window slides every 250 ms as the prediction proceeds in time. For the proposed FG-GFA and 2S-GFA architectures, RPM, SPEED, and D SPEED are included in the first group, while the second group is composed of GYRO Y and D HEADING. 5,845 examples are used for training, and 2,891 examples are used for testing. We train the different neural networks for 50,000 iterations.
Dataset for human activity recognition. We further consider the public-domain human activity recognition dataset (Anguita et al., 2013), which has six input features: three axes of an accelerometer (ACC X, ACC Y, ACC Z) and three axes of a gyroscope (GYRO X, GYRO Y, GYRO Z), and six activity classes: WALKING, WALKING UPSTAIRS, WALKING DOWNSTAIRS, SITTING, STANDING, and LAYING. These six features are sampled every 200 ms. The same sliding-window scheme is used to define input feature vectors. For the proposed group-level and two-stage gated architectures, the six features are split into two groups: the first feature group has ACC X and ACC Y, while ACC Z, GYRO X, GYRO Y, and GYRO Z are included in the second group. The training and test sets have 2,410 and 942 examples, respectively. 100,000 iterations are used for training the various neural networks.
Adopted tool flow. All models are implemented in TensorFlow 1.10.1 (Abadi et al., 2015), an open-source deep neural network library in Python.
5.2 CONFIGURATIONS OF FOUR COMPARED NEURAL NETWORKS
We compare the conventional CNN architecture, the baseline netgated architecture Patel et al. (2017), the proposed feature-group gated fusion architecture (FG-GFA), and the proposed two-stage gated fusion architecture (2S-GFA) by creating four corresponding neural network models. For fair comparison, we match the number of neurons and layers used in the processing common to all four networks as much as possible. Nevertheless, it shall be noted that compared with the CNN model, the netgated architecture has additional neurons for extracting feature-level fusion weights, FG-GFA employs additional neurons for extracting group-level fusion weights, and 2S-GFA employs both the feature and group level fusion weights. The configurations of the four neural networks are detailed in the Appendix.
5.3 SENSOR NOISE AND FAILURES
To more comprehensively evaluate the performance of different architectures, we employ the original data of the two adopted datasets, which are called “clean”, and also create variations of the datasets by introducing sensory noise and failures for both the training and testing subsets of the data.
We mimic the degradation of sensory inputs by adding random Gaussian noise to all components of the feature vector:

v_n = v_{\mathrm{ori}}(1 + \gamma\varepsilon), \quad (1)

where v_{\mathrm{ori}} and v_n are the original and noisy component values, respectively, \varepsilon is a random variable following the normal distribution N(0, 1), and \gamma controls the percentage of added noise and is set at different levels: 5%, 10%, and 20%.
In addition, we modify the original datasets by mimicking the occurrence of more catastrophic sensor failures during both training and testing. Here, at each time stamp, one feature is selected at random and the corresponding value of that feature is wiped out to zero in the feature vector.
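Both corruptions are easy to reproduce; a minimal NumPy sketch follows (assumed layout: v is an (n_features, T) window of sensor readings):

```python
import numpy as np

def add_noise(v, gamma):
    """Eq. 1: v_n = v_ori * (1 + gamma * eps), with eps ~ N(0, 1)."""
    return v * (1.0 + gamma * np.random.randn(*v.shape))

def random_failure(v):
    """Zero out one randomly chosen feature at each time stamp."""
    out = v.copy()
    idx = np.random.randint(v.shape[0], size=v.shape[1])
    out[idx, np.arange(v.shape[1])] = 0.0
    return out
```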
6 EXPERIMENTAL RESULTS
We evaluate the performance of different architectures using our driving mode prediction dataset and the public-domain human activity recognition dataset(Anguita et al., 2013) based on the settings described in the previous section.
6.1 RESULTS ON DRIVING MODE PREDICTION
Prediction accuracy with clean data. Fig. 4 shows that the proposed two-stage architecture achieves the best prediction accuracy and the group-level FG-GFA architecture the second best when the data has no additional noise or failures. The two proposed architectures significantly improve over the conventional (non-gated) CNN architecture and also lead to noticeable improvements over the baseline netgated architecture.
It is interesting to observe that, while not producing the best performance on the test set, the baseline netgated architecture has a lower training loss than the group-level and two-stage architectures, as shown in Table 1, suggesting possible over-fitting in the netgated architecture, as discussed previously.
Prediction accuracy with noise or sensor failures. To verify the robustness of the networks, we test the four neural network models when different levels of Gaussian noise are introduced to the training and test datasets. In Table 2, the two proposed architectures deliver robust performance, with the two-stage architecture being the best in all cases.
In Table 3, we compare the four models under the introduction of random sensor failures. Since random failures are more catastrophic than Gaussian noise, the overall performance of all models drops in this case. Nevertheless, the two proposed models show the best performance, and the two-stage architecture outperforms the baseline netgated architecture by nearly 3%.
[Table: prediction accuracy per noise level — columns: Noise level, Non-NetGated, NetGated, Group-level Gate, Two-stage; numeric entries not recovered.]
Analysis of performance improvements of the proposed architectures. We provide additional insights into the possible causes of the observed performance improvements made by the proposed two-stage architecture. In our setup, the three essential features for driving mode prediction are RPM, SPEED, and D SPEED, which are included in group 1. Table 4 shows the feature- and group-level fusion weights of the two-stage architecture based on the clean data. We then add 20% Gaussian noise to RPM in group 1 and report the updated fusion weights in Table 5. It can be seen that the feature-level fusion weight of RPM drops rather noticeably, possibly reflecting the degraded quality of this sensory input.
In a different experiment, we only add 20% noise to D HEADING, which is a feature in group 2. As shown in Table 6, the feature-level fusion weight of D HEADING and the group-level fusion weight of the second group both drop in comparison to the case of the clean data. It is expected that the reduced weights will reduce the contribution of D HEADING to the final prediction decision.
6.2 HUMAN ACTIVITY RECOGNITION DATA SET
We adopt a similar approach to demonstrate the performances of various models on the human activity recognition data set.
Prediction accuracy with clean data. Fig. 5 summarizes the performance of the four models on the clean data. Again, the two-stage 2S-GFA architecture produces the best performance, and the proposed group-level FG-GFA architecture the second best among the four models.
Table 5: Fusion weights of the two-stage architecture with 20% noise in RPM in the driving mode data.

                             RPM    SPEED   D SPEED   GYRO Y   D HEADING
Feature-level fusion weight  0.15   0.22    0.18      0.27     0.18
Group-level fusion weight    group 1 (RPM, SPEED, D SPEED): 0.57   group 2 (GYRO Y, D HEADING): 0.43
Table 6: Fusion weights of the two-stage architecture with 20% noise in D HEADING.

                             RPM    SPEED   D SPEED   GYRO Y   D HEADING
Feature-level fusion weight  0.29   0.30    0.10      0.16     0.14
Group-level fusion weight    group 1 (RPM, SPEED, D SPEED): 0.77   group 2 (GYRO Y, D HEADING): 0.23
Prediction accuracy with noise or sensor failures. Table 8 shows that with increasing Gaussian noise, the prediction accuracy of all models drops. However, the robustness and improvements of the two proposed architectures over the other two models are clearly observable. Table 7 summarizes the results when sensor failures are introduced. In this case, the accuracy of the non-netgated model (conventional CNN) drops by 10%. Still, the two proposed architectures demonstrate improved robustness over the conventional CNN and the baseline netgated architecture. Specifically, for this more challenging test case, the two-stage gated architecture outperforms the non-netgated model by 5% and the netgated model by 3%.
[Table: prediction accuracy per noise level — columns: Noise level, Non-NetGated, NetGated, Group-level Gate, Two-stage; numeric entries not recovered.]
7 CONCLUSION
This paper proposes two optimized gated deep learning architectures based on CNNs for sensor fusion: a coarser-grained gated architecture with (feature) group-level fusion weights and a two-stage architecture combining feature-level and group-level fusion weights. It has been shown that the proposed architectures outperform the conventional CNN architecture and the existing netgated architecture under various settings. In particular, the proposed architectures demonstrate larger improvements in the presence of random sensor noise and failures. Our future work will extend the proposed architectures to more complex sensor applications, which may include additional sensing modalities such as cameras and LiDARs.
A APPENDICES
A.1 NEURAL NETWORK ARCHITECTURES USED FOR DRIVING MODE PREDICTION
A.1.1 NON-NETGATED ARCHITECTURE
A.1.2 NETGATED ARCHITECTURE
A.1.3 THE PROPOSED FEATURE-GROUP GATED FUSION ARCHITECTURE
A.1.4 THE PROPOSED TWO-STAGE GATED FUSION ARCHITECTURE
A.2 NEURAL NETWORK ARCHITECTURES USED FOR HUMAN ACTIVITY RECOGNITION
A.2.1 NETGATED ARCHITECTURE
A.2.2 THE PROPOSED FEATURE-GROUP GATED FUSION ARCHITECTURE
A.2.3 THE PROPOSED TWO-STAGE GATED FUSION ARCHITECTURE | 1. What are the strengths and weaknesses of the proposed approach in tackling the problem of sensor fusion?
2. How does the reviewer assess the related work presented in the paper, and what other works should be included for a stronger comparison?
3. What are the limitations regarding the experimental methodology and results presentation?
4. How does the reviewer perceive the novelty and significance of the proposed modifications compared to prior works in deep multimodal learning?
5. Are there any minor comments or clarifications that could improve the quality of the paper? | Review | Review
This paper tackles the problem of sensor fusion, where multiple (possibly differing) sensor modalities are available and neural network architectures are used to combine information from them to perform prediction tasks. The paper proposed modifications to a gated fusion network specifically: 1) Grouping sets of sensors and concatenating them before further processing, and 2) Performing multi-level fusion where early sensor data representations are concatenated to produce weightings additional to the those obtained from features concatenated at a later stage. Experimental results show that these architectures achieve performance gains from 2-6%, especially when sensors are noisy or missing.
Strengths
+ The architectures encourage fusion at multiple levels (especially the second one), which is a concept that has been successful across the deep learning literature
+ The paper looks at an interesting topic, especially related to looking at the effects of noise and missing sensors on the gating mechanisms.
+ The results show some positive performance gains, although see caveats below.
Weaknesses
- The related work paragraph is extremely sparse. Fusion is an enormous field (see survey cited in this paper as well [1]), and I find the small choice of fusion results with a YouBot to be strange. A strong set of related work is necessary, focusing on those that are similar to the work. As an example spatiotemporal fusion (slow fusion [2]) bears some resemblence to this work but there are many others (e.g. [3,4] as a few examples).
[1] Ramachandram, Dhanesh, and Graham W. Taylor. "Deep multimodal learning: A survey on recent advances and trends." IEEE Signal Processing Magazine 34.6 (2017): 96-108.
[2] Karpathy, Andrej, et al. "Large-scale video classification with convolutional neural networks." Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2014
[3] Mees, Oier, Andreas Eitel, and Wolfram Burgard. "Choosing smartly: Adaptive multimodal fusion for object detection in changing environments." Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016.
[4] Kim, J., Koh, J., Kim, Y., Choi, J., Hwang, Y., & Choi, J. W. (2018). Robust Deep Multi-modal Learning Based on Gated Information Fusion Network. arXiv preprint arXiv:1807.06233.
- The paper claims to provide a "deep understanding of the relationships between sensory inputs, fusion weights, network architecture, and resulting performance". I don't think it really achieves
this with the small examples of weights for some simple situations.
- It is very unclear whether the architectures have more or less parameters. At one point it is stated that the original architecture overfits and the new architecture has less parameters (Sec 2.2 and 3). But then it is stated for fairness the number of neurons is equalized (5.2), and later in that section that the new architectures have additional neurons. Which of these is accurate?
- Related to the previous point, and possibly the biggest weakness, the experimental methodology makes it hard to tell if performance is actually improved. For example, it is not clear to me that the performance gains are not just a result of less overfitting (for whatever reason) of baselines and that the fixed number of epochs therefore results in stopping at a better performance. Please show training and validation curves so that we can see whether the epochs chosen for the baselines are not just chosen after overfitting (in which case early stopping will improve the performance). As another example, there are no variances shown in the bar graphs.
- The examples with noise and failures are limited. For example, it is also not clear why an increase of noise in the RPM feature (Table 5) actually increases the weight of that group in the two-stage architecture. What does that mean? In general there isn't any principled method proposed for analyzing these situations.
Some minor comments/clarifications:
- What is the difference between these gated networks and attentional mechanisms, e.g. alpha attention (see "Attention is all you need" paper)?
- What is a principled method to decide on the groupings?
- There are several typos throughout the paper
* "in the presence of snesor" => "in the presence of sensor"
* Throughout the paper: "Predication" => "Prediction"
* "Likelihood of stucking the training"
- Tensorflow is not a simulation environment
Overall, the paper proposes architectural changes to an existing method for fusion, and while positive results are demonstrated there are several issues in the experimental methodology that make it unclear where the benefits come from. Further, the paper lacks novelty as multi-level fusion has been explored significantly and the changes are rather minor. There is no principled method or concepts that drive the architectural changes, and while the authors claim a deeper investigation into the networks' effectiveness under noise and failures the actual analysis is too shallow. |
ICLR | Title
Optimized Gated Deep Learning Architectures for Sensor Fusion
Abstract
Sensor fusion is a key technology that integrates various sensory inputs to allow for robust decision making in many applications such as autonomous driving and robot control. Deep neural networks have been adopted for sensor fusion in a body of recent studies. Among these, the so-called netgated architecture was proposed, which has demonstrated improved performance over conventional convolutional neural networks (CNN). In this paper, we address several limitations of the baseline netgated architecture by proposing two further optimized architectures: a coarser-grained gated architecture employing (feature) group-level fusion weights and a two-stage gated architecture leveraging both group-level and feature-level fusion weights. Using driving mode prediction and human activity recognition datasets, we demonstrate the significant performance improvements brought by the proposed gated architectures and also their robustness in the presence of sensor noise and failures.
1 INTRODUCTION
Sensor fusion is an essential technology to autonomous systems such as self-driving cars and mobile robots. In advanced driver-assistance systems (ADAS), many sensors such as cameras, ultrasonic sensors, and LiDARs are utilized for enhanced safety and driving experience. Sensor fusion based vehicle and robot control technologies have been explored (Vargas-Meléndez et al., 2016; Jain et al., 2016; Garcia et al., 2017; Bohez et al., 2017; Patel et al., 2017). In addition, devices like smartphones and smartwatches typically integrate a number of sensors, making these devices a suitable platform for running sensor fusion applications such as activity recognition. Several sensor fusion techniques for activity recognition have been proposed (Yurtman & Barshan, 2017; Dehzangi et al., 2017; Zhao & Zhou, 2017; Gravina et al., 2017).
More specifically, in Bohez et al. (2017), a deep reinforcement learning based sensor fusion algorithm is discussed for robot control. A Kuka YouBot is used with multiple LiDAR sensors for simulations. In terms of sensor fusion, early fusion which concatenates all inputs as feature planes is compared with three other fusion techniques: concatenating convolution layer outputs, reducing the concatenated features with a 1x1 convolution, and accumulating convolution outputs. However, sensor noise and failures are not considered in this work. In addition, due to the fact that only the same type of sensory inputs is used in the experiments, the performance of sensor fusion based on different kinds of sensory inputs is unclear.
Among sensor fusion techniques employed in automobiles, Vargas-Meléndez et al. (2016) exploit neural networks with a Kalman filter for vehicle roll angle estimation, and show the advantage of using an inertial measurement unit (IMU) without additional suspension deflection sensors. Jain et al. (2016) consider a sensor-rich platform with a learning algorithm for maneuver prediction. Long short-term memory (LSTM) networks, a type of recurrent neural network (RNN), are used with sensory inputs from cameras, GPS, and speedometers. Garcia et al. (2017) propose joint probabilistic data fusion for road environments. However, neither a multiplicity of sensory inputs nor sensor noise and failures are considered. The adopted architecture is simplistic, where input data are only fused in one layer.
In the field of wearable devices, Yurtman & Barshan (2017) utilize early fusion, which concatenates sensory inputs. With this simple fusion approach, classical supervised learning methods such as Bayesian classifiers, k-nearest-neighbors, support vector machines, and artificial neural networks
are compared. Dehzangi et al. (2017) use deep convolutional neural networks with IMU data. Zhao & Zhou (2017) use a CNN with angle embedded gate dynamic images, which are pre-processed inputs for gait recognition. Gravina et al. (2017) summarize three fusion methods, namely, data, feature, and decision level fusion. However, effective sensor fusion network architectures for coping with sensor failures are not deeply investigated.
In terms of sensor fusion architectures, Patel et al. (2017) propose a so-called netgated architecture in which the information flow in a given convolutional neural network (CNN) is gated by fusion weights extracted from camera and LiDAR inputs. These fusion weights are used for computing a weighted sum of the sensory inputs. The weighted sum passes through fully connected layers to create a steering command. The gated network (netgated) is shown to be robust to sensor failures compared to basic CNNs. However, a deep understanding of the relationships between sensory inputs, fusion weights, network architecture, and the resulting performance is not provided.
The main objective of this paper is to propose optimized gated architectures that address three limitations of the baseline netgated architecture of Patel et al. (2017) and to investigate how different fusion architectures operate on clean sensory data and in the presence of sensor noise and failures. Our main contributions are:
• Propose a new coarser-grained gated architecture which learns robustly a set of fusion weights at the (feature) group level;
• Further propose a two-stage gating architecture which exploits both the feature-level and group-level fusion weights, leading to further performance improvements.
• Analyze the characteristics of the proposed architectures and show how they may address the limitations of the netgated architecture in terms of inconsistency of fusion weights, over-fitting, and lack of diverse fusion mechanisms.
By utilizing driving mode prediction and human activity recognition datasets, we demonstrate the significant performance improvements brought by the proposed architectures over the conventional CNN and netgated architectures under various settings, including cases where random sensor noise and failures are present. Empirical evidence is also analyzed to help shed light on the underlying causes that may be responsible for the observed improvements of the two proposed architectures.
2 THE BASELINE NETGATED ARCHITECTURE AND ITS LIMITATIONS
2.1 THE NETGATED ARCHITECTURE
The netgated architecture proposed in Patel et al. (2017) offers a promising approach for sensor fusion. This architecture was proposed in the context of unmanned ground vehicle (UGV) autonomous driving with convolutional neural networks and two sensors. A more general version of this architecture (with five sensors/features) is depicted in Fig. 1. In Patel et al. (2017), data from two sensory inputs, i.e. camera and LiDAR, are processed individually through convolutional (conv) layers, pooling layers, and fully connected layers. The outputs from the fully connected (FC) layers (e.g. "FC-f1" to "FC-f5" in the first dashed box in Fig. 1) are concatenated and then fused by another FC layer (e.g. "FC-con" in Fig. 1), where two feature-level fusion weights are created. Feature-level fusion weights were originally referred to as scalars in Patel et al. (2017). Note that each fusion weight is a scalar value and that in Fig. 1 five feature-level fusion weights are extracted, which are the outputs of the "FC-con" layer. These fusion weights are multiplied with the corresponding outputs from the feature-level FC layers, i.e. the first dashed box, which is duplicated for better illustration of the data flow in Fig. 1. Finally, these weighted feature outputs are fused by the last FC layer (i.e. "FC-out"), which produces the final prediction decision.
The netgated architecture is interesting in the sense that the extracted feature-level weights may be conceptually thought of as a “gate” variable for each feature input, and that a sensory input (feature) may be shut off from the network by a zero-valued fusion weight when it is corrupted by noise or sensor failures. As such, this architecture may be promising in providing robust sensor fusion capability.
2.2 LIMITATIONS OF THE BASIC NETGATED ARCHITECTURE
The netgated architecture offers an appealing end-to-end deep learning solution. Nevertheless, partially due to its end-to-end black-box nature, this architecture has several limitations, as discussed below.
Inconsistency of Fusion Weights. First, consider the situation in which there are N input features f1, f2, · · · , fN with the corresponding feature-level fusion weights fw1, fw2, · · · , fwN. As in Fig. 1, the feature-level fusion weights are produced by the “FC-con” layer based on fused information from all inputs. As a result, an extracted fusion weight might not fully correspond to the corresponding feature due to the information sharing among all features. As we have observed in our experimental studies, there exist cases where the feature with the largest extracted fusion weight does not represent the most critical feature for the learning task. While the ranking of the feature-level weights may reflect the relative importance of the corresponding features to a certain degree, the association between the two is not always consistent. In this paper, we refer to this phenomenon as inconsistency of fusion weights. It can be well expected that inconsistency of fusion weights may adversely affect the overall prediction accuracy, particularly when the fusion weights for certain noisy or corrupted features are not robustly learned by the network, resulting in misleadingly large fusion weight values.
Potential Over-fitting. Furthermore, for applications where many features need to be fused, using the same number of fusion weight values introduces many additional parameters that must be learned properly in the training process, making over-fitting of the model easier to occur. This situation is further exacerbated by the potential occurrence of inconsistency of fusion weights.
Lack of Additional Fusion Mechanisms. Finally, in the architecture of Fig. 1, apart from the learning of fusion weights, fusion of raw input features is done in a simplistic manner, i.e. by the last fully connected layer “FC-out”. Nevertheless, there exist more powerful raw input fusion mechanisms which could potentially lead to additional performance improvements.
We address the above limitations of the baseline netgated architecture by proposing two extensions: a coarser-grained architecture and a hierarchical two-stage architecture, referred to as the Feature-Group Gated Fusion Architecture (FG-GFA) and the Two-Stage Gated Fusion Architecture (2S-GFA), respectively, described in the following sections.
3 FEATURE-GROUP GATED FUSION ARCHITECTURE (FG-GFA)
To address the aforementioned limitations of the baseline netgated architecture, we first explore the coarser-grained architecture, namely, the Feature-Group Gated Fusion Architecture (FG-GFA), as in Fig. 2, where for illustration purposes two feature groups are shown, with the first group having three features and the second group two features. In general, a given set of N input features f1, f2, · · · , fN may be partitioned into, say, M feature groups FG1, FG2, · · · , FGM. As one specific example of this architecture, all features in a feature group are concatenated first, and then passed onto a convolution layer and a pooling layer. After going through the corresponding FC layer (“FC-g1” or “FC-g2” in Fig. 2), the processed information from all groups is concatenated and then passed onto an FC layer (“FC-con” in Fig. 2) whose outputs are split into M group-level fusion weights. The fused information of each group, e.g. “FC-g1” or “FC-g2” in Fig. 2, is multiplied by the corresponding group-level fusion weight (“FC-g1” and “FC-g2” are again duplicated in Fig. 2 to better illustrate the information flow). All weighted group-level information is combined and then processed by the final FC layer (“FC-out” in Fig. 2) to produce the prediction decision. The configuration and the number of layers at each key step of FG-GFA may be chosen differently from the specific example of Fig. 2.
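A corresponding sketch of the group-level gating, continuing the PyTorch sketch above (group assignments and layer sizes are again our assumptions):

```python
class FGGFA(nn.Module):
    # Group-level gating: features are concatenated within each group first,
    # and "FC-con" emits only M group-level fusion weights.
    def __init__(self, group_encoders, feat_dim, num_classes):
        super().__init__()
        self.group_encoders = nn.ModuleList(group_encoders)  # conv/pool/FC per group
        m = len(group_encoders)
        self.fc_con = nn.Linear(m * feat_dim, m)
        self.fc_out = nn.Linear(m * feat_dim, num_classes)

    def forward(self, groups):      # groups: list of M pre-concatenated group inputs
        g = [enc(x) for enc, x in zip(self.group_encoders, groups)]
        w = self.fc_con(torch.cat(g, dim=1))                  # (batch, M) group weights
        gated = [w[:, i:i + 1] * gi for i, gi in enumerate(g)]
        return self.fc_out(torch.cat(gated, dim=1))
```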
We now comment on the key differences between the FG-GFA architecture and the baseline netgated architecture. First of all, in addition to the final fusion operation of “FC-out” in Fig. 2, we have performed additional early fusion of sensory inputs within each group. The outputs of such within-group fusions are combined to produce a smaller number of group-level fusion weights. Furthermore, the extracted group-level weights are used to multiply the corresponding fused group feature information, not the individual feature information. These characteristics of FG-GFA introduce different types of fusion mechanisms into the network. Second, since fusion weights are now extracted only at the group level, fewer weights need to be learned compared with the baseline architecture. The fact that there are fewer tuning parameters and that early fusion takes place within each group might reduce the likelihood of the training process getting stuck in local minima. As a result, it may be possible to mitigate the issues of inconsistency of fusion weights and potential over-fitting. As will be demonstrated in the experimental study, FG-GFA leads to significantly more robust learning of fusion weights at the group level, i.e. the existence of noisy or corrupted features in a group can be much more reliably reflected in the corresponding reduction of the group-level fusion weight. As a result, group-level fusion weights become a more reliable indicator of the quality of the sensory inputs in each group. Accordingly, we have empirically observed improved performance brought by FG-GFA, as demonstrated later.
4 THE PROPOSED TWO-STAGE GATED FUSION ARCHITECTURE (2S-GFA)
In this hierarchical fusion architecture, we combine the baseline netgated architecture, which learns the feature-level fusion weights, and the proposed feature-group gated fusion architecture (FG-GFA), which extracts group-level fusion weights, into two stages. The 2S-GFA architecture is illustrated in Fig. 3, where the first three features are in group 1 and the remaining two features are in group 2.
The upper portion of the network extracts five feature-level fusion weights by splitting the outputs of the FC layer “FC-con”, in the same way as in the baseline netgated architecture. The smaller sub-network at the bottom of Fig. 3 reuses outputs from the first stage of conv layers at the top of the figure that pre-process all sensory inputs individually. It then concatenates the pre-processed feature information within each group. It produces two group-level fusion weights by splitting the outputs of the FC layer “FC-con-g”, shown by the red and yellow squares for the two groups, respectively. For each feature input, the product of its feature-level fusion weight and the group-level fusion weight defines its final feature weight, which is used to multiply the processed feature information, e.g. “FC-f1” to “FC-f5” in Fig. 3. Then, all weighted feature information is fused together by the FC layer “FC-out”, which produces the final decision.
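The final weight computation can be sketched as a small extension of the two sketches above (feature_w and group_w denote the outputs of “FC-con” and “FC-con-g”; feats are the per-feature embeddings; group_of is an assumed lookup from feature index to group index):

```python
# feature_w: (batch, N) from "FC-con"; group_w: (batch, M) from "FC-con-g";
# group_of[i] is the group index of feature i, e.g. [0, 0, 0, 1, 1].
final_w = [feature_w[:, i:i + 1] * group_w[:, group_of[i]:group_of[i] + 1]
           for i in range(len(feats))]                 # product of the two weights
gated = [wi * fi for wi, fi in zip(final_w, feats)]    # gate each processed feature
logits = fc_out(torch.cat(gated, dim=1))               # final fusion ("FC-out")
```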
Since 2S-GFA integrates the essential ingredients of the baseline netgated architecture and the group-based FG-GFA architecture, it improves over both architectures. Note again that the final fusion weight employed for each feature is the product of the feature-level weight and the corresponding group-level weight. As a result, the final fusion weight combines the key information extracted from feature-based and group-based processing. Each group-level fusion weight can be reliably learned as in the FG-GFA architecture; as a result, the final fusion weight can more reliably reflect the importance of the corresponding feature with respect to the learning task at hand, and serves as an effective gating mechanism for the feature. For example, as we have observed in our experimental study, the feature-level fusion weight for a noisy or corrupted sensory input may not fully reflect the degraded importance of that feature, as in the case of the baseline architecture. The more reliable group-level fusion weight, however, can block this feature, i.e. by making the final feature weight (the product of the feature-level and group-level weights) small. This property mitigates the issue of inconsistency of fusion weights of the baseline architecture.
On the other hand, compared with the FG-GFA architecture, 2S-GFA further leverages the information revealed by the feature-level fusion weights. Therefore, each sensory input can be gated at a finer granularity based on the feature-level fusion weight. As such, it may be expected that 2S-GFA represents an optimized middle ground between the baseline netgated architecture and the coarser-grained FG-GFA architecture and that it can learn the structure of the underlying data more effectively, as we will demonstrate experimentally.
5 EXPERIMENTAL SETTINGS
To validate the proposed FG-GFA and 2S-GFA architectures and compare them with the conventional non-gated and the baseline netgated architectures, we consider two applications: driving mode prediction and human activity recognition on smartphones.
5.1 DATASETS, SETUPS FOR DATASETS, AND TOOL FLOW
Dataset for driving mode prediction. We consider three driving modes: idle, eco, and normal. The idle mode corresponds to the situation in which the vehicle’s engine is on but the vehicle is not in motion. The vehicle is in the eco mode when the car is being driven for fuel efficiency. All other situations are labeled as the normal mode. The target is to predict the vehicle’s driving mode in the next time period given all the sensory inputs that have been available. We treat this application as a time-series prediction problem.
We have driven a 2014 Nissan Sentra for two months between two GPS coordinates to collect this driving dataset. The RPM and speed data are extracted from the on-board diagnostics (OBD2) port of the vehicle. We use the y-axis reading of a gyroscope (GYRO Y) for measuring lateral acceleration, and the difference between the GPS headings in time (D HEADING) for the steering angle of the vehicle. All sensor data used in the driving dataset are collected from a Freematics ONE+ dongle (Huang, 2017). Because the sensors in the Freematics ONE+ dongle have different extraction time periods, linear interpolation in time is used for data pre-processing. In total, five types of sensory data are used for training the neural network: RPM, SPEED, acceleration and deceleration (D SPEED), the y axis of the gyroscope (GYRO Y), and the difference of GPS headings in time (D HEADING). These five features are sampled every 250 ms. To predict the driving mode of the vehicle 5 seconds in the future, we use ten seconds of data (40 points) from each feature, normalize the collected data feature-wise, and concatenate them to form a feature vector window. The feature vector window slides every 250 ms as the prediction proceeds in time. For the proposed FG-GFA and 2S-GFA architectures, RPM, SPEED, and D SPEED are included in the first group, while the second group is composed of GYRO Y and D HEADING. 5,845 examples are used for training and 2,891 samples are used for testing. We train the different neural networks for 50,000 iterations.
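One plausible reading of this windowing scheme, as a NumPy sketch (the 20-point horizon is our computation of 5 s at one point per 250 ms; the array layout is assumed, not stated in the text):

```python
import numpy as np

def make_windows(data, win=40, horizon=20):
    # data: (T, 5) array of RPM, SPEED, D_SPEED, GYRO_Y, D_HEADING at 250 ms steps.
    # win=40 points = 10 s of history; horizon=20 points = 5 s ahead.
    data = (data - data.mean(axis=0)) / data.std(axis=0)   # feature-wise normalization
    windows, target_idx = [], []
    for t in range(len(data) - win - horizon):
        windows.append(data[t:t + win].T.reshape(-1))      # concatenated feature windows
        target_idx.append(t + win + horizon)               # time index whose mode is predicted
    return np.stack(windows), np.array(target_idx)
```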
Dataset for human activity recognition. We further consider the public-domain human activity recognition dataset (Anguita et al., 2013), which has six input features: three axes of an accelerometer (ACC X, ACC Y, ACC Z) and three axes of a gyroscope (GYRO X, GYRO Y, GYRO Z), and six activity classes: WALKING, WALKING UPSTAIRS, WALKING DOWNSTAIRS, SITTING, STANDING, and LAYING. These six features are sampled every 200 ms. The same sliding window scheme is used to define input feature vectors. For the proposed group-level and two-stage gated architectures, the six features are split into two different groups. The first feature group has ACC X and ACC Y, while ACC Z, GYRO X, GYRO Y, and GYRO Z are included in the second group. The training and test sets have 2,410 and 942 examples, respectively. 100,000 iterations are used for training the various neural networks.
Adopted tool flow. The adopted simulation environment is based on TensorFlow 1.10.1 (Abadi et al., 2015), an open-source deep neural network library in Python.
5.2 CONFIGURATIONS OF FOUR COMPARED NEURAL NETWORKS
We compare the conventional CNN architecture, the baseline netgated architecture (Patel et al., 2017), the proposed feature-group gated fusion architecture (FG-GFA), and the proposed two-stage gated fusion architecture (2S-GFA) by creating four corresponding neural network models. For fair comparison, we match the number of neurons and layers used in the processing common to all four networks as much as possible. Nevertheless, it shall be noted that compared with the CNN model, the netgated architecture has additional neurons for extracting feature-level fusion weights, FG-GFA employs additional neurons for extracting group-level fusion weights, and 2S-GFA employs both the feature- and group-level fusion weights. The configurations of the four neural networks are detailed in the Appendix.
5.3 SENSOR NOISE AND FAILURES
To more comprehensively evaluate the performance of different architectures, we employ the original data of the two adopted datasets, which are called “clean”, and also create variations of the datasets by introducing sensory noise and failures for both the training and testing subsets of the data.
We mimic the degradation of sensory inputs by adding random Gaussian noise to all components of the feature vector:

v_n = v_ori (1 + γ ε),    (1)

where v_ori and v_n are the original and noisy component values, respectively, ε is a random variable following the normal distribution N(0, 1), and γ controls the percentage of the added noise and is set at different levels: 5%, 10%, and 20%.
In addition, we modify the original datasets by mimicking the occurrence of more catastrophic sensor failures during both training and testing. Here, at each time stamp, one feature is selected at random and the corresponding value of that feature is wiped out to zero in the feature vector.
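Both corruption procedures are simple to reproduce; a minimal NumPy sketch, with the (features, time) window layout assumed rather than stated:

```python
import numpy as np

def add_gaussian_noise(v, gamma):
    # Eq. (1): v_n = v_ori * (1 + gamma * eps), eps ~ N(0, 1);
    # gamma is 0.05, 0.10, or 0.20 in the experiments.
    return v * (1.0 + gamma * np.random.randn(*v.shape))

def random_sensor_failure(x):
    # x: (num_features, num_timestamps) window; at each time stamp one
    # randomly chosen feature is wiped out to zero.
    x = x.copy()
    for t in range(x.shape[1]):
        x[np.random.randint(x.shape[0]), t] = 0.0
    return x
```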
6 EXPERIMENTAL RESULTS
We evaluate the performance of different architectures using our driving mode prediction dataset and the public-domain human activity recognition dataset (Anguita et al., 2013) based on the settings described in the previous section.
6.1 RESULTS ON DRIVING MODE PREDICTION
Prediction accuracy with clean data. Fig. 4 shows that the proposed two-stage architecture has the best prediction accuracy and the group-level FG-GFA architecture produces the second best performance when the data have no additional noise or failures. The two proposed architectures significantly improve over the conventional (non-gated) CNN architecture and also lead to noticeable improvements over the baseline netgated architecture.
It is interesting to observe that while not producing the best performance on the test set, the baseline netgated architecture has a lower training loss than the group-level and two-stage architectures, as shown in Table 1, suggesting possible occurrence of over-fitting in the netgated architecture as discussed previously.
Prediction accuracy with noise or sensory failures. To verify the robustness of the networks, we test the four neural network models when different levels of Gaussian noise are introduced to the training and test data sets. In Table 2, the two proposed architectures produce robust performance, with the two-stage architecture being the best under all cases.
In Table 3, we compare the four different models under the introduction of random sensor failures. Since random failures are more catastrophic than Gaussian noise, the overall performance of all models drops in this case. Nevertheless, the two proposed models show the best performance, and the two-stage architecture outperforms the baseline netgated architecture by nearly 3%.
[Table 2: prediction accuracy at each Gaussian noise level; columns: Noise level, Non-NetGated, NetGated, Group-level Gate, Two-stage. The numeric entries were lost in extraction.]
Analysis of performance improvements of the proposed architectures. We provide additional insights on the possible causes for the observed performance improvements made by the proposed two-stage architecture. In our setup, the three essential features for driving mode prediction are RPM, SPEED, and D SPEED, which are included in group 1. Table 4 shows the feature- and group-level fusion weights of the two-stage architecture based on the clean data. We add 20% Gaussian noise to RPM in group 1 and report the updated fusion weights in Table 5. It can be seen that the feature-level fusion weight of RPM drops rather noticeably, possibly reflecting the degraded quality of this sensory input.
In a different experiment, we only add 20% noise to D HEADING, which is a feature in group 2. As shown in Table 6, the feature-level fusion weight of D HEADING and the group-level fusion weight of the second group both drop in comparison to the case of the clean data. It is expected that the reduced weights will reduce the contribution of D HEADING to the final prediction decision.
6.2 HUMAN ACTIVITY RECOGNITION DATA SET
We adopt a similar approach to demonstrate the performances of various models on the human activity recognition data set.
Prediction accuracy with clean data. Fig. 5 summarizes the performances of the four models based on the clean data. Again the two-stage 2S-GFA architecture produces the best performance and the group-level FG-GFA architecture produces the second best performance among the four models.
Table 5: Fusion weights of the two-stage architecture with 20% noise in RPM in the driving mode data.

                             RPM    SPEED   D SPEED   GYRO Y   D HEADING
Feature-level Fusion Weight  0.15   0.22    0.18      0.27     0.18
Group-level Fusion Weight    0.57 (group 1: RPM, SPEED, D SPEED)   0.43 (group 2: GYRO Y, D HEADING)
Table 6: Fusion weight analysis with 20% noise in D HEADING using the two-stage architecture.

                             RPM    SPEED   D SPEED   GYRO Y   D HEADING
Feature-level Fusion Weight  0.29   0.30    0.1       0.16     0.14
Group-level Fusion Weight    0.77 (group 1: RPM, SPEED, D SPEED)   0.23 (group 2: GYRO Y, D HEADING)
Prediction accuracy with noise or sensory failures. Table 8 shows that with increasing Gaussian noise, the prediction accuracy of all models drops. However, the robustness and improvements of the two proposed architectures over the other two models are clearly observable. Table 7 summarizes the results when sensor failures are introduced. In this case, the accuracy of the non-netgated network model (conventional CNN) drops by 10%. Still, the two proposed architectures demonstrate improved robustness over the conventional CNN and the baseline netgated architectures. Specifically, for this more challenging test case, the two-stage gated architecture outperforms the non-netgated model by 5% and the netgated model by 3%.
[Table 8: prediction accuracy at each Gaussian noise level on the human activity recognition dataset; columns: Noise level, Non-NetGated, NetGated, Group-level Gate, Two-stage. The numeric entries were lost in extraction.]
7 CONCLUSION
This paper proposes two optimized gated deep learning architectures based on CNNs for sensor fusion: a coarser-grained gated architecture with (feature) group-level fusion weights and a two-stage architecture combining feature-level and group-level fusion weights. It has been shown that the proposed architectures outperform the conventional CNN architecture and the existing netgated architecture under various settings. In particular, the proposed architectures demonstrate larger improvements in the presence of random sensor noise and failures. Our future work will extend the proposed architectures to more complex sensor applications, which may include additional sensing modalities such as cameras and LiDARs.
A APPENDICES
A.1 NEURAL NETWORK ARCHITECTURES USED FOR DRIVING MODE PREDICTION
A.1.1 NON-NETGATED ARCHITECTURE
A.1.2 NETGATED ARCHITECTURE
A.1.3 THE PROPOSED FEATURE-GROUP GATED FUSION ARCHITECTURE
A.1.4 THE PROPOSED TWO-STAGE GATED FUSION ARCHITECTURE
A.2 NEURAL NETWORK ARCHITECTURES USED FOR HUMAN ACTIVITY RECOGNITION
A.2.1 NETGATED ARCHITECTURE
A.2.2 THE PROPOSED FEATURE-GROUP GATED FUSION ARCHITECTURE
A.2.3 THE PROPOSED TWO-STAGE GATED FUSION ARCHITECTURE | 1. What is the main contribution of the paper, and how does it improve upon previous work in multisensor fusion?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and potential overfitting?
3. How does the two-stage gated fusion architecture affect the number of parameters and running time compared to the baseline model?
4. Could the authors provide more clarity and visualizations regarding the proposed architecture and its performance?
5. Are there any presentation issues or suggestions for improvement regarding tables, figures, and referencing? | Review | Review
Overview and contributions: The authors improve upon several limitations of the baseline netgated architecture by proposing 1) a coarser-grained gated fusion architecture and 2) a two-stage gated fusion architecture. The authors show improvements in driving mode prediction and human activity recognition in settings where all modalities are observed as well as settings where there are noisy or missing modalities.
Strengths:
1. The model seems interesting and tackles the difficult problem of multisensor fusion under both normal and noisy settings.
2. Good results obtained on standard benchmarks with improvements in settings where all modalities are observed as well as settings where there are noisy or missing modalities.
Weaknesses:
1. I am worried about the novelty of the proposed approach. The main idea for the fusion-group gated fusion architecture is to perform additional early fusion of sensory inputs within each group which reduces the number of group-level fusion weights and therefore the number of parameters to tune. The two-stage gated fusion architecture simply combines the baseline model and the proposed fusion-group model. Both these ideas seem relatively incremental.
2. Doesn't the final two-stage gated fusion architecture further increase the number of parameters as compared to the baseline model? I believe there are several additional FC-NN blocks in Figure 3 and more attention gating weights. I find this counterintuitive since section 2.2 motivated "Potential Over-fitting" as one drawback of the baseline Netgated architecture. How does the increase in parameters for the final model affect the running time and convergence?
Questions to authors:
1. I don't understand Tables 4,5,6. Why are the results for Group-level Fusion Weight in the middle of several columns? Which features are being used in which groups? Please make this clear using vertical separators.
2. For the proposed two-stage gated fusion architecture, do the 2 branches learn different things (i.e focus on different portions of the multimodal inputs)? I would have liked to see more visualizations and analysis instead of just qualitative results.
Presentation improvements, typos, edits, style, missing references:
1. General poor presentation of experimental results. Tables are not clear and bar graphs are not professionally drawn. The paper extends to 9 pages when a lot of space could be saved by making the presentation of experimental results more compact. I believe the guidelines mention that more pages can be used if there are extensive results, but I don't think the experimental results warrant the extra page. |
1. What are the strengths and weaknesses of the proposed gated deep learning architectures for sensor fusion?
2. How does the reviewer assess the technical accuracy of the paper, particularly regarding the grouped features and their effectiveness?
3. What are the limitations of the experimental setup and dataset choice?
4. Are there any inconsistencies or contradictions in the presented results?
5. How does the reviewer evaluate the adequacy of citations and literature coverage in the paper?
6. Overall, what is the reviewer's opinion of the paper's quality and suitability for publication in a top-tier conference like ICLR? | Review | Review
This paper proposes two gated deep learning architectures for sensor fusion. They are both based on the previous work of
Naman Patel et al.'s modality fusion with CNNs for UGV autonomous driving in indoor environments (IROS). By having the grouped features, the author demonstrated improved performance, especially in the presence of random sensor noise and failures.
#Organization/Style:
The paper is well written, organized, and clear on most points. A few minor points:
1) The total length of the paper exceeds 8 pages. Some figures and tables should be adjusted to have it fit into 8 pages.
2) The literature review is limited.
3) There are clearly some misspellings. For example, the "netgated" is often written as "negated".
#Technical Accuracy:
The two architectures that the author proposes are all based on the grouped features, which, from my point of view, is a very important and necessary part of the new model. However, the author failed to rigorously prove or clearly demonstrate why this is effective for the new model. Moreover, how to form the groups or how many groups are needed is not clearly specified. The experiments used only two completely different datasets, neither of which is related to the previous sensor fusion method they are trying to compete with. I'm afraid this method cannot generalize to a common case.
In addition, if we look at Table 4 and Table 5, we can find the first Group-level Fusion Weight actually increases, which seems contradictory to the result shown in Table 6.
#Adequacy of Citations:
Poor coverage of literature in sensor fusion. There are less than 10 references are related to sensor fusion.
Overall, it is not an ICLR standard paper. |
ICLR | Title
Learning a neural response metric for retinal prosthesis
Abstract
Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce typical patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected so as to produce a neural response as close as possible to the desired response. This requires a technique for computing the distance between a desired response and an achievable response that is meaningful in terms of the visual signal being conveyed. We propose a method to learn a metric on neural responses directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of visual inputs. Using data from electrical stimulation experiments, we demonstrate that the learned metric could produce improvements in the performance of a retinal prosthesis.
1 INTRODUCTION
An important application of neuroscience research is the development of electronic devices to replace the function of diseased or damaged neural circuits (Wilson et al., 1991; Schwartz, 2004). Artificial vision has been a particularly challenging modality due to the richness of visual information, its diverse uses in perception and behavior, and the complexity of fabricating a device that can interface effectively with neural circuitry (Stingl et al., 2013; Wilke et al., 2011; Jepson et al., 2014a).
The most advanced example is a retinal prosthesis: a device that replaces the function of neural circuitry in the retina lost to degenerative disease. Most of the computational work related to this application has focused on building encoding models that use the visual image to accurately predict the spiking activity of populations of retinal ganglion cells (RGCs), the output neurons of the retina that convey visual information to the brain. Leading models include linear models (Chichilnisky, 2001), probabilistic point-process models (Pillow et al., 2008) and recently proposed models employing rich nonlinearities (McIntosh et al.; Batty et al.; Shah et al., 2017).
However, an accurate encoding model, although valuable, is insufficient. Any retinal prosthesis – whether based on electrical stimulation (Sekirnjak et al., 2008) or optical stimulation (Boyden et al., 2005; Bernstein et al., 2008) – is limited in its ability to create arbitrary desired patterns of neural activity, due to inefficiencies or lack of specificity in the stimulation modality (Barrett et al., 2014; Jepson et al., 2014a). Thus, a given stimulation system can only achieve a limited vocabulary of elicited spike patterns. Although a powerful and accurate encoding model might indicate that a particular spike pattern would be the natural biological response to the incident visual stimulus, the desired spike pattern might not reside within the feasible set of the stimulation device (Figure 1).
Previous studies (Jepson et al., 2014b) have addressed this problem by selecting the electrical stimulation which minimizes the number of unmatched spikes across cells – equivalent to the Hamming distance between two binary vectors. Even though a Hamming distance is easy to compute, this solution is not necessarily optimal. The goal of a prosthetic device should instead be to select an
electrical stimulation pattern that produces a response as close as possible to the desired pattern of activity in terms of the elicited visual sensation (Figure 1C). In lieu of measuring the visual sensation produced by a prosthetic, we instead posit that one may infer a distance metric based on the signal and noise properties of individual and populations of neurons (Shlens et al., 2009; Pillow et al., 2008; Field & Chichilnisky, 2007). In contrast, previous approaches to spike metrics have focused on user-specified, parametric functions (Victor & Purpura, 1996; 1997; Victor, 2005) or unsupervised techniques to cluster nearby spike patterns (van Rossum, 2001; Dubbs et al., 2010; Ganmor et al., 2015).
In this work, we propose a neural response metric learned directly from the statistics and structure of firing patterns in neural populations, with the aim of using it to select optimal electrical stimulation patterns in a prosthesis device. In particular, we learn a neural response metric by applying ideas from metric learning to recordings of RGC populations in non-human primate retina. We demonstrate that the learned metric provides an intuitive, meaningful representation of the similarity between spike patterns in the RGC population, capturing the statistics of neural responses as well as similarity between visual images. Finally, we use this metric to select the optimal electrical stimulation pattern within the constraints of the electrical interface to a population of RGCs.
2 METRIC AND SIMILARITY LEARNING
In this section we describe the algorithmic framework for learning pseudometrics or similarity measures in neural response space. We start by introducing notations and conventions that we use throughout the paper. We use boldface letters to denote vectors and upper-case letters to denote matrices. We denote the symmetrization operator of a square matrix $M$ by $\mathrm{sym}(M) = \frac{1}{2}(M + M^\top)$.
A single frame of visual stimulus, $s$, is an image represented as an $n \times n$ matrix. The space of possible stimuli is $S \subset \mathbb{R}^{n \times n}$. A sequence $s_l, \ldots, s_m$ of $m - l + 1$ frames, where $s_j \in S$, is denoted as $s_{l:m}$. In order to simplify our notation, we define the responses of the cells to be a $p$-dimensional vector and the space of possible responses as $R \subseteq \mathbb{R}^p$. Analogously, a sequence of cell activities $r_t$ for $t = l, \ldots, m$ is denoted $r_{l:m}$. To simplify the presentation below, we confine the visual stimulus to be a single image and the corresponding response of each cell to be a scalar.
2.1 RELATED WORK
Metric and similarity learning using contrastive loss (Chopra et al., 2005; Hadsell et al., 2006) and triplet loss (Shalev-Shwartz et al., 2004; Weinberger & Saul, 2009) have been used extensively in several domains. In computer vision, these methods achieve state-of-the-art performance on face recognition (Schroff et al., 2015; Sun et al., 2014; Taigman et al., 2014) and image retrieval (Wang et al., 2014). A central theme of this work has focused on improving metric learning by mining semi-hard negatives (Schroff et al., 2015). Because many negatives provide minimal information, these methods use a partially learned metric to identify negatives that may maximally improve the quality of the metric given a fixed number of updates. To avoid the computational burden imposed by such methods, some works have proposed alternative loss functions which either make efficient use of all the negatives in a batch (Oh Song et al., 2016) or multiple classes using n-tuplets (Sohn, 2016). Our method is similar to these methods as we make efficient use of all the negatives in a batch as in (Oh Song et al., 2016) but also use a simplified, softmax-based loss function (Sohn, 2016).
2.2 EMPIRICAL LOSS MINIMIZATION
Given the population response space $R$, we learn a function $h : R \times R \to \mathbb{R}$ which captures invariances in the spiking response when the same stimulus is presented multiple times. The scoring function $h$ is viewed either as a similarity function or a pseudometric. To distinguish between the two cases, we use $d(\cdot, \cdot)$ to denote a pseudometric. A pseudometric $d$ needs to satisfy:
Positivity. $d(r_1, r_2) \ge 0$ and $d(r, r) = 0$
Sub-additivity. $d(r_1, r_2) + d(r_2, r_3) \ge d(r_1, r_3)$
Symmetry. $d(r_1, r_2) = d(r_2, r_1)$
During the experiments, repeats of the same sequence of visual stimuli are presented. The responses collected during the $i$th presentation (repeat) of the visual stimulus are denoted by $(s^i_t, r^i_t)$. Here $s^i_t$ is the stimulus history which elicits population response $r^i_t$ at time $t$. The goal of this approach is to learn a metric such that pairs of responses generated during different repeats of the same stimulus are closer, or more similar, than pairs of responses generated by different stimuli. We slice the data into triplets of the form $(r, r_+, r_-)$ where $r$ and $r_+$ are responses of cells to the same stimulus while $r_-$ is the response to a different visual stimulus (Figure 2A). We refer to $(r, r_+)$ as a positive pair and $(r, r_-)$ as a negative pair (Figure 2B).
A common method to improve the learned metrics is to choose difficult negatives as described above. As it can be computationally demanding to mine hard negatives, we found that a much simpler strategy is effective: randomly sampling a common set of negatives for all the positive examples in the batch. We first sample positive pairs of responses corresponding to random stimulus times, together with a common set of negative responses generated by stimuli distinct from any stimulus for the positive responses. A batch of triplets is hence denoted by $T = \{\{r^i, r^i_+\}, \{r^j_-\}\}$.
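A minimal NumPy sketch of this batch construction; the array layout (repeats × time bins × cells) and all names are our own assumptions, not the authors' code:

```python
import numpy as np

def sample_triplet_batch(responses, batch=32, n_neg=64, rng=None):
    """Sample positive pairs from two repeats at the same stimulus times,
    plus one shared set of negatives drawn from distinct times.

    responses: binned spikes of shape (n_repeats, n_times, n_cells).
    """
    rng = rng or np.random.default_rng()
    n_repeats, n_times, _ = responses.shape
    t_pos = rng.choice(n_times, size=batch, replace=False)
    rep_a, rep_b = rng.choice(n_repeats, size=2, replace=False)
    anchors, positives = responses[rep_a, t_pos], responses[rep_b, t_pos]
    # negatives come from stimulus times distinct from every positive time
    t_neg = rng.choice(np.setdiff1d(np.arange(n_times), t_pos),
                       size=n_neg, replace=False)
    negatives = responses[rng.choice(n_repeats), t_neg]
    return anchors, positives, negatives
```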
Given a training set of triplets $T$, the goal is to find a pseudometric such that for most $(r^i, r^i_+, \{r^j_-\}) \in T$ the distance between responses of two repeats of the same stimulus is smaller than their distance to any of the irrelevant response vectors,

$$d(r^i, r^i_+) < \min_j d(r^i, r^j_-) \qquad (1)$$
We cast the learning task as empirical risk minimization of the form,

$$\frac{1}{|T|} \sum_{(r^i, r^i_+, \{r^j_-\}) \in T} \ell(r^i, r^i_+, \{r^j_-\}) \, ,$$

where $\ell(\cdot)$ is a differentiable, typically convex, relaxation of the ordering constraints from (1). We use the following surrogate loss,

$$\ell(r^i, r^i_+, \{r^j_-\}) = \beta \log\Big(1 + \sum_j e^{\frac{d(r^i, r^i_+) - d(r^i, r^j_-)}{\beta}}\Big) \, .$$

We set $\beta = 10$ in our implementation.
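As a concrete illustration, a NumPy sketch of this surrogate loss; the batched shapes are our assumption:

```python
import numpy as np

def surrogate_loss(d_pos, d_neg, beta=10.0):
    """beta * log(1 + sum_j exp((d(r, r+) - d(r, r_j-)) / beta)), averaged
    over anchors. d_pos: shape (B,); d_neg: shape (B, J) shared negatives."""
    z = (d_pos[:, None] - d_neg) / beta
    return beta * np.mean(np.log1p(np.exp(z).sum(axis=1)))
```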
In the case of similarity learning, we swap the role of the pairs and define,

$$\ell(r^i, r^i_+, \{r^j_-\}) = \beta \log\Big(1 + \sum_j e^{\frac{h(r^i, r^j_-) - h(r^i, r^i_+)}{\beta}}\Big) \, .$$

We implemented two parametric forms for distance and similarity functions. The first is a quadratic form, where $A \succeq 0$ and

$$h_A(r_1, r_2) = r_1^\top A \, r_2 \quad \text{and} \quad d_A(r_1, r_2) = (r_1 - r_2)^\top A \, (r_1 - r_2) \, . \qquad (2)$$
We learn the parameters by minimizing the loss using Adagrad (Duchi et al., 2011). We project $A$ onto the space of positive semi-definite matrices after every update using singular value decomposition. Concretely, we rewrite $A$ as $UDU^\top$, where $U$ is a unitary matrix and $D$ is a diagonal matrix. We then threshold the diagonal elements of $D$ to be non-negative.
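A sketch of this projection; for a symmetric matrix, an eigendecomposition yields the same PSD projection as the SVD-based procedure described above:

```python
import numpy as np

def project_psd(A):
    """Project a square matrix onto the PSD cone: symmetrize, then clip
    the eigenvalues of A = U diag(w) U^T at zero."""
    A = 0.5 * (A + A.T)
    w, U = np.linalg.eigh(A)
    return (U * np.clip(w, 0.0, None)) @ U.T
```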
2.3 EXTENDING METRIC SPACES FOR UNOBSERVED NEURAL POPULATIONS
The quadratic metric provides a good demonstration of the hypothesis that a learned metric space may be suitable. However, a quadratic metric is not feasible for a real prosthetic device because such a metric must be trained on visually-evoked spiking activity of a neural population. In a retinal prosthetic, such data are not available because the retina does not respond to light. Furthermore, a quadratic model contains limited modeling capacity to capture nonlinear visual processing in the retina (Field & Chichilnisky, 2007).
To address these issues, we introduce a nonlinear embedding based on a convolutional neural network (CNN). The CNN encodes each cell’s spiking responses in an embedding space grouped by cell type and cell body location before performing a series of nonlinear operations to map the response embedding from the response space $R$ to $\mathbb{R}^p$. One benefit of this approach is that this model has an embedding dimensionality independent of the number of cells recorded while only employing knowledge of the cell body location and cell type. The cell body location and cell type are identifiable from recordings of non-visually-evoked (spontaneous) neural activity in the retina (Li et al., 2015; Richard et al., 2015).
The resulting response metric may be generalized to blind retinas by merely providing cell center and cell type information. That is, no visually-evoked spiking activity is necessary to train an embedding for a new retina. Even though the model may be fit on non-visually-evoked spiking activity, this model class is superior to the quadratic model when fit to a given retina. We discuss preliminary experiments for predicting the activity in unobserved retinas in the Discussion.
We reserve a complete discussion of model architecture and training procedure for the Appendix. In brief, we employ a hierarchical, convolutional network topology to mirror the translation invariance expected in the receptive field organization of the retina. The convolutional network consists of 595K parameters across 7 layers and employs batch normalization to accelerate training. Let $\phi(r)$ be the convolutional embedding of responses. The similarity and metric learned using the convolutional network are given as

$$h_\phi(r_1, r_2) = \phi(r_1) \cdot \phi(r_2) \quad \text{and} \quad d_\phi(r_1, r_2) = \|\phi(r_1) - \phi(r_2)\|_2 \, . \qquad (3)$$
We learn the parameters by minimizing the loss using Adam (Kingma & Ba, 2014).
3 RESULTS
3.1 EXPERIMENTAL SETUP
Spiking responses from hundreds of retinal ganglion cells (RGCs) in primate retina were recorded using a 512 electrode array system (Litke et al., 2004; Frechette et al., 2005). ON and OFF parasol RGC types were identified using visual stimulation with binary white noise and reverse correlation (Chichilnisky, 2001).
Since each analysis requires different stimulus conditions and numbers of cells, we leave the details of each preparation to the subsequent sections. For each analysis, spike trains were discretized at the 120 Hz frame rate of the display (bins of 8.33ms), and responses across all the cells for 1 time bin were used to generate each training example.
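A sketch of this binning step, under our own naming conventions for the inputs:

```python
import numpy as np

def bin_spikes(spike_times, n_cells, duration, rate=120.0):
    """Discretize per-cell spike times (seconds) into binary bins at the
    display frame rate (8.33 ms bins): one row per time bin, one column
    per cell, so each row is one training example."""
    n_bins = int(np.ceil(duration * rate))
    R = np.zeros((n_bins, n_cells), dtype=np.int8)
    for cell, times in enumerate(spike_times):
        idx = np.floor(np.asarray(times) * rate).astype(int)
        R[idx[idx < n_bins], cell] = 1
    return R
```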
In the following sections, we quantitatively assess the quality of multiple learned metrics – each metric with increasing complexity – with respect to a baseline (Hamming distance). First, we assess the quality of the learned metrics with respect to traditional error analysis. Second, we assess the quality of the learned embeddings with respect to optimal decoding of the stimulus. Finally, we demonstrate the utility of the learned metric by employing the metric in a real electrical stimulation experiment.
3.2 QUANTITATIVE EVALUATION OF LEARNED METRIC SPACE.
The quality of a metric in our context can be measured by its effectiveness for determining whether a pair of firing patterns arose from the same visual stimulus or from distinct visual stimuli. To evaluate the metric at the scale of large RGC populations, we focus our analysis on responses of a collection of 36 OFF parasol cells and 30 ON parasol cells to 99 repeats of a 10 second long white noise stimulus clip. The responses were partitioned into training (first 8 seconds) and testing (last 2 seconds) of each trial.
We assessed a range of learned embedding models and baselines by employing receiver-operating characteristic (ROC) analysis. Specifically, we selected the population firing pattern, r, at a particular offset in time in the experiment (corresponding to a visual stimulus history) and compared this firing pattern to two other types of firing patterns: (1) the firing pattern from the same group of cells at the same time during a second repeated presentation of the stimulus, r+; and (2) the firing pattern at a distinct, randomly selected time point, r−. For a given threshold, if the metric results in a correct classification of r+ as the same stimulus, we termed the result a true positive. For the same threshold, if an embedding metric incorrectly classified r− as the same stimulus, we termed it a false positive. Note that unlike training, we do not choose a common set of negatives for testing.
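A sketch of this ROC computation from precomputed distances, calling a pair "same stimulus" when its distance falls below the threshold (names are ours):

```python
import numpy as np

def roc_points(d_pos, d_neg):
    """d_pos: distances of (r, r+) pairs; d_neg: distances of (r, r-) pairs.
    Returns false positive and true positive rates over all thresholds."""
    thresholds = np.sort(np.concatenate([d_pos, d_neg]))
    tpr = np.array([(d_pos < t).mean() for t in thresholds])
    fpr = np.array([(d_neg < t).mean() for t in thresholds])
    return fpr, tpr
```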
Figure 3A traces out the trade-off between the false positive rate and true positive rate across a range of thresholds in an assortment of embedding models for neural population activity. Better models trace out curves that bend to the upper-left of the figure. The line of equality indicates a model that is performing at chance. A simple baseline model of a Hamming distance (red curve) performs least accurately. A quadratic metric, which permits a variable weight for each neuron and interactions between pairs of neurons, improves the performance (green curve). Finally, replacing the quadratic metric with a Euclidean distance between embeddings of responses using a convolutional neural network improves the performance further (blue curve).
The ROC analysis provides strong evidence that increasingly sophisticated embedding models learn global structure above and beyond a Hamming distance metric. We also examined how the local structure of the space is captured by the embedding metric by calculating the learned embeddings on a test dataset consisting of 99 repeats each of the 10 different visual stimuli. We randomly selected a firing pattern r from one presentation of the stimulus, and identified k nearest neighbors according to our metric, for increasing k. Among the k nearest neighbors, we assessed precision, i.e. what fraction of the nearest neighbors correspond to 98 other presentations of the same stimulus. A perfect learned embedding model would achieve a precision of 1 for k ≤ 98 and 98/k otherwise (Figure 3B, dashed). We also measured recall, i.e. what fraction of the remaining 98 presentations of the same stimulus are within the k nearest neighbors. A perfect learned embedding model would achieve recall of k/98 for k ≤ 98 and 1 otherwise (Figure 3B, dashed). Figure 3B highlights the performance of various learned methods across increasing k. The results indicate that the precision and recall are below an optimal embedding, but the convolutional metric performs better than quadratic and Hamming metrics.
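A sketch of the precision/recall computation for one query response, under our own naming:

```python
import numpy as np

def precision_recall_at_k(dists, same_stim, k):
    """dists: distances from a query response to all other test responses;
    same_stim: boolean mask marking the other repeats of the same stimulus
    (98 of them here). Returns (precision, recall) among k nearest neighbors."""
    nn = np.argsort(dists)[:k]
    hits = same_stim[nn].sum()
    return hits / k, hits / same_stim.sum()
```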
To visualize the discriminability of the response metric, we embed the 99 responses to 10 distinct stimuli using t-SNE (Maaten & Hinton, 2008) with distances estimated using the convolutional metric. We see in Figure 3C that responses corresponding to same visual stimulus (same color) cluster in the same region of embedding space reflecting the ability of the response space metric to discriminate distinct stimuli.
3.3 LEARNED METRIC CAPTURES STIMULUS SIMILARITY.
Although we trained the metric only based on whether pairs of responses are generated by the same stimulus, Figure 3C suggests that the learned response metric provides additional discriminative stimulus information. In the following sections, we attempt to quantitatively measure how well the response metric captures stimulus information by performing stimulus reconstruction. Our hypothesis is that stimulus reconstruction provides a proxy for the ultimate goal of assessing perceptual similarity.
Stimulus reconstruction has a rich history in the neural coding literature and presents significant technical challenges. To simplify the problem, we focus on linear reconstruction (Bialek et al., 1991; Rieke et al.; Roddey & Jacobs, 1996) because the objective is clear, the problem is convex and the resulting reconstruction is information rich (Stanley et al., 1999; Berry et al., 1997). One limitation of this approach is that linear reconstruction does not capture rich nonlinearities potentially present in encoding. For this reason, we focus subsequent analysis on the quadratic and Hamming metrics
and reserve the analysis of the nonlinear embedding for future work with nonlinear reconstruction techniques (see Discussion).
A technical issue that arises in the context of metric space analysis is the infeasibility of computing the embeddings for all spike patterns across large numbers of cells (e.g. 66 cells in the data of Figure 3 produce $2^{66}$ responses). Therefore we focus on a spatially localized and overlapping population of 13 RGCs (6 ON and 7 OFF parasol cells in Figure 1B) because we can explicitly list all the $2^{13}$ possible response patterns. Training data was accrued from RGC responses to 5 repeated presentations of a 5 minute long white noise sequence. The first 4 minutes of each presentation were employed for training; the last minute was employed for testing.
We examined the similarity between the decoded stimulus and the target stimulus, for responses that, according to our learned quadratic metric, are increasingly distant from the target. Figure 4A (first column, third row) shows the spatial profile of the linearly decoded target response.¹
We next calculate the distance of this target firing pattern to all $2^{13}$ firing patterns and rank-order them based on the learned metric. Figure 4A, top rows, shows firing patterns at the 1%, 2.5%, 5% and 75% percentiles. Below these firing patterns are the associated linearly decoded stimuli, and the errors with respect to the target firing pattern. As we choose patterns farther from the target in terms of our metric, the distance between the decoded stimulus for the chosen firing pattern and that of the target firing pattern systematically increases.
We quantify this observation in Figure 4B by randomly selecting pairs of responses from the test data and calculating the optimal linearly decoded stimuli associated with them (see Methods). We then plot the mean squared error (MSE) between the linearly decoded stimuli against the normalized metric distance between the responses. The decoding error systematically increases as the metric distance between the corresponding responses increases, for both the learned quadratic metric (blue) as well as the Hamming distance (green). However, the distances generated by the Hamming distance are discrete and therefore provide a less granular representation of the decoding errors associated with the stimuli.

¹ Note that the reconstruction is based on the static population response pattern. We remove the time dimension by approximating ON and OFF parasol cell responses with identical (but oppositely signed) temporal filters. Subsequent analyses are performed by only decoding the temporally filtered stimulus. The temporally filtered stimulus $s$ is decoded as $s = Ar + b$, where the parameters $A, b$ are estimated from RGC recordings.
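The decoder in the footnote can be fit by ordinary least squares on paired (response, filtered-stimulus) data; a minimal sketch with our own array conventions:

```python
import numpy as np

def fit_linear_decoder(R, S):
    """Fit s ≈ A r + b. R: responses (N, p); S: temporally filtered
    stimuli flattened to (N, n_pixels)."""
    X = np.hstack([R, np.ones((R.shape[0], 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(X, S, rcond=None)     # rows of W: [A^T; b^T]
    return W[:-1].T, W[-1]                        # A, b
```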
3.4 LEARNED RESPONSE METRIC MAY IMPROVE PERFORMANCE OF RETINAL PROSTHESIS.
Using recorded experimental data, we now show how response metrics could improve the function of retinal prostheses by selecting optimal electrical stimulation patterns. For a given target response, we use the learned quadratic metric to select the best electrical stimulation pattern, and evaluate the effectiveness of this approach by linearly decoding the stimulus from the elicited responses.
Calibration of RGC responses to electrical stimulation patterns was performed by repeating a given electrical stimulation pattern 25 times, at each of 40 current levels, in a random order. Due to the limited duration of experiments, we focused on stimulation patterns in which only one electrode was active. The data was spike sorted and spiking probability was computed for each cell by averaging across trials for each electrical stimulation pattern (Mena et al., 2017). For each cell and electrode, the probability of firing as a function of stimulation current was approximated with a sigmoid function.
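A sketch of the per-(cell, electrode) sigmoid fit; the logistic parameterization and initial guess are our assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sigmoid(currents, p_fire):
    """Fit p(fire | current) with a logistic curve for one (cell, electrode);
    p_fire is the spiking probability averaged across the 25 trials."""
    f = lambda c, a, b: 1.0 / (1.0 + np.exp(-(c - a) / b))
    (a, b), _ = curve_fit(f, currents, p_fire, p0=[np.median(currents), 1.0])
    return a, b
```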
Since the RGC response to electrical stimulation is probabilistic, we evaluate each stimulation pattern by the expected distance between the elicited responses and the target firing pattern. For a quadratic response metric this can be easily computed in closed form. Given a response metric, we rank different stimulation patterns based on the expected distance to the target firing pattern. In Figure 5A and B (first columns) we show example target response patterns and the corresponding linearly decoded visual stimulus. We then analyze the best stimulation pattern determined by the learned quadratic metric, and by the Hamming distance. The responses sampled from the response distributions for the selected stimulation patterns are shown in Figures 5A and B (second and third columns each). We find that the linearly decoded stimuli were closer to the target when the stimulation was chosen via the learned response metric compared to the Hamming distance.
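For completeness, the closed form referred to above follows directly if cells fire independently with probabilities $p_i$ under a given stimulation pattern (an assumption consistent with the per-cell sigmoid fits): $\mathbb{E}[(r - t)^\top A (r - t)] = (p - t)^\top A (p - t) + \sum_i A_{ii} \, p_i (1 - p_i)$. A sketch:

```python
import numpy as np

def expected_quadratic_distance(p, t, A):
    """Expected d_A(r, t) for independent Bernoulli responses r_i ~ p_i:
    a mean term plus a variance term, since Cov(r) = diag(p * (1 - p))
    under the independence assumption."""
    m = p - t
    return m @ A @ m + np.diag(A) @ (p * (1.0 - p))
```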
To quantify this behavior, we calculated the mean squared error between the decoded stimuli when the stimulation was chosen using the learned metric and the Hamming distance (Figure 5C). The
learned metric and Hamming metric identify the same stimulation pattern, and hence achieve the same error, for 49% of the target responses observed. However, on 33% of the target responses, the learned metric achieves lower mean squared error than the Hamming distance; conversely, the learned metric has larger MSE than the Hamming distance on 18% of the target responses.
The above analysis demonstrates the benefit of using the learned metric over the Hamming distance to choose the best stimulation pattern. However, the collection of available electrical stimulation patterns might change over time due to hardware or biophysical constraints. To assess the improvement in such cases, we next ask how well the learned metric performs relative to the Hamming distance if we choose the $k$th best current pattern using each metric (Figure 5D). Increasing $k$ for the learned metric leads to higher MSE in terms of the decoded stimulus. Importantly, the learned metric achieves systematically lower MSE than the Hamming distance across the nearest $k \le 10$ stimulation patterns. These results indicate that the learned metric systematically selects better electrical stimulation patterns for eliciting reasonably close firing patterns.
4 DISCUSSION
The learned metric approach has two major potential implications for visual neuroscience. First, it provides a novel method to find “symbols” in the neural code of the retina that are similar in the sense that they indicate the presence of similar stimuli (Ganmor et al., 2015). Second, it has an application to retinal prosthesis technology, in which hardware constraints demand that the set of neural responses that can be generated with a device be used to effectively transmit useful visual information. For this application, a metric on responses that reflects visual stimulus similarity could be extremely useful.
The present approach differs from previously proposed spike train metrics (reviewed in (Victor, 2005)). Previous approaches have employed unsupervised techniques to cluster nearby spike patterns (Ganmor et al., 2015; Prentice et al., 2016; Gardella et al., 2017) or employed user-specified, parametric approaches (Victor & Purpura, 1997; Aronov et al., 2003). In the case of the single snapshots in time used here, the latter approach (Victor-Purpura metric) has only one degree of freedom, which is a user-specified cost associated with moving spikes from one cell to another. In our proposed method, the relative importance of cell identity is learned directly from the statistics of population firing patterns.
The present work is a stepping stone towards building an encoding algorithm for retinal prostheses. In this paper, we learn the metric using light evoked responses. However, we need to estimate this metric in a blind retina, which has no light evoked responses. The convolutional metric is adaptable to any RGC population by merely noting cell types and center locations. Thus a convolutional metric could be trained on multiple healthy retinas and applied to a blind retina. Preliminary results in this direction indicate that a convolutional metric trained on half of the cells in a retinal recording (training data) generalizes to the other half (validation data), yielding performance higher than a quadratic metric (and comparable to a convolutional metric) trained directly on the validation data.
Additional techniques may also be helpful in extending our method to data involving many cells, temporal responses, and additional response structure. For example, using recurrent neural networks (Lipton et al., 2015) to embed responses may help compute distances between spiking patterns consisting of multiple time bins, perhaps of unequal length. Boosting (Freund & Schapire, 1999) may help combine multiple efficiently learned metrics for a smaller, spatially localized groups of cells. Other metrics may be developed to capture invariances learned by commonly used encoding models (Chichilnisky, 2001; Pillow et al., 2008). Also, triplet mining techniques (i.e., choosing hard negatives), a commonly used trick in computer vision, may improve efficiency (Schroff et al., 2015; Oh Song et al., 2016). Novel metrics could also be learned with additional structure in population responses, such as the highly structured correlated activity in RGCs Mastronarde (1983); Greschner et al. (2011). This noise correlation structure may be learnable using negative examples that destroy the noise correlations in data while preserving light response properties, by taking responses of different cells from different repeats of the stimulus.
Note that the convolutional metric outperforms the quadratic metric at both global (ROC curves) and local (precision recall curves) scales. However, using current retinal prosthesis technologies, we might be able to resolve information only up to a particular scale. For current retinal prostheses,
capturing global structure may be of greatest importance, because state-of-the-art technology has a relatively coarse vocabulary for stimulating RGCs (Humayun et al., 2012; Zrenner et al., 2011) (see also Figure 1). Specifically, the “nearest” elicited firing pattern is “far” in terms of the corresponding visual stimulus (Figure 5). In terms of the proposed learned metric, the nearest feasible firing pattern achievable by electrical stimulation in our experiments is at the 10th percentile of all possible firing patterns. In this context, the average closest stimulation pattern, expressed as a percentile of the learned metric distances, provides a valuable benchmark to measure the performance of a prosthesis and how that performance is affected by advances in the underlying hardware and software.
ACKNOWLEDGEMENTS
We thank Vineet Gupta for numerous helpful discussions. We thank Pawel Hottowy, Alexander Sher, Alan M. Litke, Alexandra Tikidji-Hamburyan, Georges Goetz, Nora Brackbill, Colleen Rhoades and Lauren Grosberg for help with experiments. Research funding provided by Internship program at Google Brain (NPS), DARPA Contract FA8650-16-1-765 (EJC).
A APPENDIX
A.1 DETAILS OF THE CONVOLUTIONAL METRIC
We build a hierarchical, convolutional network to mirror the translation invariance expected in the receptive field organization of the retina. The goal of this network is to flexibly capture population activity of ON and OFF cells but employ minimal knowledge about cell receptive fields. The reason for this approach is to build a model that may be amenable to a retinal prosthetic in which the characterization of individual retinal ganglion cells is limited (Jepson et al., 2014a;b).
In particular, the network employs knowledge of the receptive field locations and firing rates of individual cells but the network is independent of the number of cells in the retina. The latter point is achieved by embedding the responses of neurons into pathways grouped by cell type. In our experiments, we focus on 2 cell types (ON and OFF parasols), thus we employ a 2 channel pathway (Kandel et al., 2000).
The network receives as input the spiking activity of ON and OFF parasols and embeds these spike patterns as one-hot vectors placed at the spatial locations of each cell’s receptive field. The resulting pattern of activations is summed across all cells in the ON and OFF populations, respectively, and passed through several convolutional layers of a network. Successive layers shrink the spatial activation size of the representation, while increasing the number of filter channels (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). The final embedding response vector has 1/16th the number of pixels of the stimulus and represents the flattened representation of the last layer of the network.
Let $c$ denote the number of different cells. The RGC population response is a vector $r \in \{0, 1\}^c$.
• Represent responses as vectors over $\{+1, -1\}$ with $\tilde{r} = 2(r - 0.5)$.
• Compute the scale for each cell as a function of the mean firing rate: $s_i = a_0 \mu_i^3 + a_1 \mu_i^2 + a_2 \mu_i + a_3$.
• Map each cell to its center location on a grid with the same spatial dimensions as the visual stimulus. Let $M_i$ be the grid embedding of cell $i$, so $M_i$ is zero at all positions except the center of the cell.
• Perform a separable $5 \times 5$ convolution of stride 1 on each $M_i$ to get the RF estimate of the cell, $\tilde{M}_i$.
• Add the activations of cells of the same type to get the total activation for a given cell type. Hence, the activation map for each cell type is $A = \sum_i \tilde{r}_i s_i \tilde{M}_i$, with the sum over cells of that type (a sketch follows this list). Subsequent layers receive input as a two-channel activation map corresponding to ON and OFF parasol cells.
• The convolutional layers further combine information across multiple cells of different types. The details of the different layers are shown in Figure 6 and Table 1.
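A sketch of the per-type activation map from the list above; since convolution is linear, summing the scaled one-hot maps before the $5 \times 5$ RF convolution is equivalent to convolving each $M_i$ first. Names and shapes are our assumptions:

```python
import numpy as np

def activation_map(r_tilde, s, centers, grid_shape):
    """Embed one cell type's responses on the stimulus grid.

    r_tilde: {+1, -1} responses; s: per-cell scales s_i; centers: (row, col)
    grid location of each cell. The separable 5x5 RF convolution would then
    be applied to the returned map."""
    A = np.zeros(grid_shape)
    for v, si, (y, x) in zip(r_tilde, s, centers):
        A[y, x] += v * si
    return A
```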
A.2 ACCURACY OF THE LINEAR DECODER
For the latter analyses assessing the quality of the metric, we reconstruct the stimulus from neural responses with linear decoding. In this section we demonstrate that even though the linear decoder is rather simplistic, the reconstructions are on par with a non-parametric decoding method which averages the stimuli corresponding to the response pattern. In Figure 7A, we see that the linear decoder has a very similar spatial structure to the non-parametric decoder. To quantify this, we compute the mean-squared error between the two methods of decoding, normalized by the magnitude of the non-parametric decoder (Figure 7B, blue dots). The error of linear decoding is comparable to the error between two non-parametric decodings computed using independent samples of stimuli (Figure 7B, green dots). These observations show that the linear decoder is a reasonable first-order approximation of the encoded stimulus.

1. What is the focus and contribution of the paper regarding spike train distance metrics?
2. What are the strengths of the proposed approach, particularly in its ability to classify neural responses and capture the structure of the neural code?
3. What are the weaknesses of the paper, specifically regarding the execution of the ideas presented?
4. How does the reviewer suggest improving the paper, especially regarding the choice of metric and decoding errors?
5. What additional measurements or comparisons should be included in the paper to provide a more comprehensive analysis?

Review
The authors develop new spike train distance metrics that cluster together responses to the same stimulus, and push responses to different stimuli away from each other. Two such metrics are discussed: neural networks, and quadratic metrics. They then show that these metrics can be used to classify neural responses as coming from the same vs different stimuli, and that they outperform the naive Hamming distance metric at this task. Moreover, they show that this metric implicitly captures some structure in the neural code: more similar responses correspond to more similar visual stimuli. Finally, they discuss the implications of their metric for retinal prosthesis, and show some (fairly preliminary) data for how it could be used.
Overall, I love the concepts in this paper. I have some reasonably substantive concerns over the execution, outlined below. But I encourage the authors to consider following through on these suggestions to improve their paper: the paper's key idea is really good, and I think it's worth the effort to flesh that idea out more thoroughly.
My specific suggestions / criticisms are:
1) The quadratic metric seems only marginally better than the Hamming one (especially in Figs. 3 and 4), whereas the neural nets do much better as a metric (Fig. 3). However, most of the analyses (Figs. 4,5) use the quadratic metric. Why not use the better neural network metric for the subsequent studies of image similarity, and retinal stimulation?
2) For Figs. 4, 5, where you use linear decoders to test the stimuli corresponding to the neural responses, how good are those decoders (i.e., MSE between decoded stim and true stim.)? If the decoders are poor, then the comparisons based on those decoders might not be so meaningful. I encourage you to report the decoding error, and if it's large, to make a better decoder and use it for these studies.
3) Similarly, for Fig. 4, why not measure the MSE between the actual image frames corresponding to these neural responses? Presumably, you have the image frames corresponding to the target response, and for each of the other responses shown (i.e., the responses at different distances from the target). This would avoid any complications from sub-optimal decoders, and be a much more direct test.
(I understand that, for Fig. 5, you can't do this direct comparison, as the electrically stimulated patterns don't have corresponding image frames, so you need to decode them.)
ICLR | Title
Learning a neural response metric for retinal prosthesis
Abstract
Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce typical patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected so as to produce a neural response as close as possible to the desired response. This requires a technique for computing the distance between a desired response and an achievable response that is meaningful in terms of the visual signal being conveyed. We propose a method to learn a metric on neural responses directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of visual inputs. Using data from electrical stimulation experiments, we demonstrate that the learned metric could produce improvements in the performance of a retinal prosthesis.
1 INTRODUCTION
An important application of neuroscience research is the development of electronic devices to replace the function of diseased or damaged neural circuits (Wilson et al., 1991; Schwartz, 2004). Artificial vision has been a particularly challenging modality due to the richness of visual information, its diverse uses in perception and behavior, and the complexity of fabricating a device that can interface effectively with neural circuitry (Stingl et al., 2013; Wilke et al., 2011; Jepson et al., 2014a).
The most advanced example is a retinal prosthesis: a device that replaces the function of neural circuitry in the retina lost to degenerative disease. Most of the computational work related to this application has focused on building encoding models that use the visual image to accurately predict the spiking activity of populations of retinal ganglion cells (RGCs), the output neurons of the retina that convey visual information to the brain. Leading models include linear models (Chichilnisky, 2001), probabilistic point-process models (Pillow et al., 2008) and recently proposed models employing rich nonlinearities (McIntosh et al.; Batty et al.; Shah et al., 2017).
However, an accurate encoding model, although valuable, is insufficient. Any retinal prosthesis – whether based on electrical stimulation (Sekirnjak et al., 2008) or optical stimulation (Boyden et al., 2005; Bernstein et al., 2008) – is limited in its ability to create arbitrary desired patterns of neural activity, due to inefficiencies or lack of specificity in the stimulation modality (Barrett et al., 2014; Jepson et al., 2014a). Thus, a given stimulation system can only achieve a limited vocabulary of elicited spike patterns. Although a powerful and accurate encoding model might indicate that a particular spike pattern would be the natural biological response to the incident visual stimulus, the desired spike pattern might not reside within the feasible set of the stimulation device (Figure 1).
Previous studies (Jepson et al., 2014b) have addressed this problem by selecting the electrical stimulation which minimizes the number of unmatched spikes across cells – equivalent to the Hamming distance between two binary vectors. Even though a Hamming distance is easy to compute, this solution is not necessarily optimal. The goal of a prosthetics device should be to instead select an
electrical stimulation pattern that produces a response as close as possible to the desired pattern of activity in terms of the elicited visual sensation (Figure 1C). In lieu of measuring the visual sensation produced by a prosthetic, we instead posit that one may infer a distance metric based on the signal and noise properties of individual and populations of neurons (Shlens et al., 2009; Pillow et al., 2008; Field & Chichilnisky, 2007). In contrast, previous approaches to spike metrics have focused on user-specified, parameteric functions (Victor & Purpura, 1996; 1997; Victor, 2005) or unsupervised techniques to cluster nearby spike patterns (van Rossum, 2001; Dubbs et al., 2010; Ganmor et al., 2015).
In this work, we propose a neural response metric learned directly from the statistics and structure of firing patterns in neural populations, with the aim of using it to select optimal electrical stimulation patterns in a prosthesis device. In particular, we learn a neural response metric by applying ideas from metric learning to recordings of RGC populations in non-human primate retina. We demonstrate that the learned metric provides an intuitive, meaningful representation of the similarity between spike patterns in the RGC population, capturing the statistics of neural responses as well as similarity between visual images. Finally, we use this metric to select the optimal electrical stimulation pattern within the constraints of the electrical interface to a population of RGCs.
2 METRIC AND SIMILARITY LEARNING
In this section we describe the algorithmic framework for learning pseudometrics or similarity measures in neural response space. We start by introducing notations and conventions that we use throughout the paper. We use bold face letters to denote vectors and upper case letters to denote matrices. We denote the symmetrization operator of a square matrix M by sym(M) = 12 (M +M >).
A single frame of visual stimulus, s, is an image represented as an n × n matrix. The space of possible stimuli is S ⊂ Rn×n. A sequence sl, . . . , sm of m− l+1 frames, where sj ∈ S , is denoted as sl:m. In order to simplify our notation, we define the responses of the cells to be a p dimensional vector and the space of possible responses as r ⊆ Rp. Analogously, a sequence of cell activities rt for t = l, . . . ,m is denoted rl:m. To simplify the presentation below, we confine the visual stimulus to be a single image and the corresponding response of each cell to be a scalar.
2.1 RELATED WORK
Metric and similarity learning using constrative loss (Chopra et al., 2005; Hadsell et al., 2006) and triplets loss (Shalev-Shwartz et al., 2004; Weinberger & Saul, 2009) have been used extensively in several domains. In computer vision, these methods achieve state-of-the-art performance on face recognition (Schroff et al., 2015; Sun et al., 2014; Taigman et al., 2014) and image retrieval (Wang et al., 2014). A central theme of this work has focused on improving metric learning by mining semi-hard negatives (Schroff et al., 2015). Because many negatives provide minimal information, these methods use a partially learned metric to identify negatives that may maximally improve the quality of the metric given a fixed number of updates. To avoid the computational burden imposed by such methods, some works have proposed alternative loss functions which either make efficient use of all the negatives in a batch (Oh Song et al., 2016) or multiple classes using n-tuplets (Sohn, 2016). Our method is similar to these methods as we make efficient use of all the negatives in a batch as in (Oh Song et al., 2016) but also use a simplified, softmax-based loss function (Sohn, 2016).
2.2 EMPIRICAL LOSS MINIMIZATION
Given the population response space R, we learn a function h : R × R → R which captures invariances in the spiking response when the same stimulus is presented multiple times. The scoring function h is viewed either as a similarity function or a pseudometric. To distinguish between the two cases, we use d(·, ·) to denote a pseudometric. A pseudometric d needs to satisfy:
Positivity. d(r1, r2) ≥ 0 and d(r, r) = 0 Sub-additivity. d(r1, r2) + d(r2, r3) ≥ d(r1, r3) Symmetry. d(r1, r2) = d(r2, r1)
During the experiments, repeats of the same sequence of visual stimuli are presented. The responses collected during the ith presentation (repeat) of visual stimulus are denoted by (sit, r i t). Here s i t is the stimulus history which elicits population response rit at time t. The goal of this approach is to learn a metric such that pairs of responses generated during different repeats of the same stimulus are closer, or more similar, than pairs of responses generated by different stimuli. We slice the data into triplets of the form (r, r+, r−) where r and r+ are responses of cells to the same stimulus while r− is the response to a different visual stimuli (Figure 2A). We refer to (r, r+) as a positive pair and (r, r−) as a negative pair (Figure 2B).
A common method to improve the learned metrics is to choose difficult negatives as described above. As it can be computationally demanding to mine hard negatives, we found that a much simpler
strategy of randomly sampling a common set of negatives for all the positive examples in the batch is effective. Hence we first sample positive pairs of responses corresponding to random stimulus times and a common set of negative responses generated by stimuli distinct from any stimulus for positive responses. Hence a batch of triplets is denoted by T = {{ri, ri+}, {rj−}}.
Given a training set of triplets T , the goal is to find a pseudometric such that for most (ri, ri+, {rj−}) ∈ T the distance between responses of two repeats of same stimulus is smaller than their distance to any of the irrelevant response vectors,
d(ri, ri+) < min j d(ri, rj−) (1)
We cast the learning task as empirical risk minimization of the form,
1 |T | ∑
(ri,ri+,{r j −})∈T
`(ri, ri+, {rj−}) ,
where `() is a differential, typically convex, relaxation of the ordering constraints from (1). We use the following,
`(ri, ri+, {rj−}) = β log 1 +∑ j e d(ri,ri+)−d(r i,r j −) β , as the surrogate loss. We set β = 10 in our implementation.
In the case of similarity learning, we swap the role of the pairs and define,
`(ri, ri+, {rj−}) = β log 1 +∑ j e h(ri,r j −)−h(r i,ri+) β , We implemented two parametric forms for distance and similarity functions. The first is a quadratic form where A 0 and
hA(r1, r2) = r > 1A r2 and dA(r1, r2) = (r1 − r2)>A (r1 − r2) . (2)
We learn the parameters by minimizing the loss using Adagrad (Duchi et al., 2011). We project A onto the space of positive semi-definite matrices space after every update using singular value decomposition. Concretely, we rewrite A as, UDU> where U is a unitary matrix and D is a diagonal matrix. We then threshold the diagonal elements of D to be non-negative.
2.3 EXTENDING METRIC SPACES FOR UNOBSERVED NEURAL POPULATIONS
The quadratic metric provides a good demonstration of the hypothesis that a learned metric space may be suitable. However, a quadratic metric is not feasible for a real prosthetic device because such a metric must be trained on visually-evoked spiking activity of a neural population. In a retinal prosthetic, such data are not available because the retina does not respond to light. Furthermore, a quadratic model contains limited modeling capacity to capture nonlinear visual processing in the retina (Field & Chichilnisky, 2007).
To address these issues, we introduce a nonlinear embedding based on a convolutional neural network (CNN). The CNN encodes each cell’s spiking responses in an embedding space grouped by cell type and cell body location before performing a series of nonlinear operations to map the response embedding from the response space R to Rp. One benefit of this approach is that this model has an embedding dimensionality independent of the number of cells recorded while only employing knowledge of the cell body location and cell type. The cell body location and cell type are identifiable from recordings of non-visually-evoked (spontaneous) neural activity in the retina (Li et al., 2015; Richard et al., 2015).
The resulting response metric may be generalized to blind retinas by merely providing cell center and cell type information. That is, no visually-evoked spiking activity is necessary to train an embedding for a new retina. Even though the model may be fit on non visually-evoked spiking activity, this
model class is superior then the quadratic model when fit to a given retina. We discuss preliminary experiments for predicting the activity in unobserved retinas in the Discussion.
We reserve a complete discussion of model architecture and training procedure for the Appendix. In brief, we employ a hierarchical, convolutional network topology to mirror the translation invariance expected in the receptive field organization of the retina. The convolutional network consists of 595K parameters across 7 layers and employs batch normalization to accelerate training. Let φ(r) be the convolutional embedding of responses. The similarity and metric learned using the convolutional network is given as -
hφ(r1, r2) = φ(r1) · φ(r2) and dφ(r1, r2) = ‖φ(r1)− φ(r2)‖2 . (3)
We learn the parameters by minimizing the loss using Adam (Kingma & Ba, 2014).
3 RESULTS
3.1 EXPERIMENTAL SETUP
Spiking responses from hundreds of retinal ganglion cells (RGCs) in primate retina were recorded using a 512 electrode array system (Litke et al., 2004; Frechette et al., 2005). ON and OFF parasol RGC types were identified using visual stimulation with binary white noise and reverse correlation (Chichilnisky, 2001).
Since each analysis requires different stimulus conditions and numbers of cells, we leave the details of each preparation to the subsequent sections. For each analysis, spike trains were discretized at the 120 Hz frame rate of the display (bins of 8.33ms), and responses across all the cells for 1 time bin were used to generate each training example.
In the following sections, we quantitatively assess the quality of multiple learned metrics – each metric with increasing complexity – with respect to a baseline (Hamming distance). First, we assess the quality of the learned metrics with respect to traditional error analysis. Second, we assess the quality of the learned embeddings with respect to optimal decoding of the stimulus. Finally, we demonstrate the utility of the learned metric by employing the metric in a real electrical stimulation experiment.
3.2 QUANTITATIVE EVALUATION OF LEARNED METRIC SPACE.
The quality of a metric in our context can be measured by its effectiveness for determining whether a pair of firing patterns arose from the same visual stimulus or from distinct visual stimuli. To evaluate the metric at the scale of large RGC populations, we focus our analysis on responses of a collection of 36 OFF parasol cells and 30 ON parasol cells to 99 repeats of a 10 second long white noise stimulus clip. The responses were partitioned into training (first 8 seconds) and testing (last 2 seconds) of each trial.
We assessed a range of learned embedding models and baselines by employing receiver-operating characteristic (ROC) analysis. Specifically, we selected the population firing pattern, r, at a particular offset in time in the experiment (corresponding to a visual stimulus history) and compared this firing pattern to two other types of firing patterns: (1) the firing pattern from the same group of cells at the same time during a second repeated presentation of the stimulus, r+; and (2) the firing pattern at a distinct, randomly selected time point, r−. For a given threshold, if the metric results in a correct classification of r+ as the same stimulus, we termed the result a true positive. For the same threshold, if an embedding metric incorrectly classified r− as the same stimulus, we termed it a false positive. Note that unlike training, we do not choose a common set of negatives for testing.
Figure 3A traces out the trade-off between the false positive rate and true positive rate across a range of thresholds in an assortment of embedding models for neural population activity. Better models trace out curves that bend to the upper-left of the figure. The line of equality indicates a model that is performing at chance. A simple baseline model of a Hamming distance (red curve) performs least accurately. A quadratic metric which permits variable weight for each neuron and interaction between pairs of neurons improves the performance further (green curve). Finally, replacing a quadratic metric with a euclidean distance between embedding of responses using a convolutional neural network improves the performance further (blue curve).
The ROC analysis provides strong evidence that increasingly sophisticated embedding models learn global structure above and beyond a Hamming distance metric. We also examined how the local structure of the space is captured by the embedding metric by calculating the learned embeddings on a test dataset consisting of 99 repeats each of the 10 different visual stimuli. We randomly selected a firing pattern r from one presentation of the stimulus, and identified k nearest neighbors according to our metric, for increasing k. Among the k nearest neighbors, we assessed precision, i.e. what fraction of the nearest neighbors correspond to 98 other presentations of the same stimulus. A perfect learned embedding model would achieve a precision of 1 for k ≤ 98 and 98/k otherwise (Figure 3B, dashed). We also measured recall, i.e. what fraction of the remaining 98 presentations of the same stimulus are within the k nearest neighbors. A perfect learned embedding model would achieve recall of k/98 for k ≤ 98 and 1 otherwise (Figure 3B, dashed). Figure 3B highlights the performance of various learned methods across increasing k. The results indicate that the precision and recall are below an optimal embedding, but the convolutional metric performs better than quadratic and Hamming metrics.
To visualize the discriminability of the response metric, we embed the 99 responses to 10 distinct stimuli using t-SNE (Maaten & Hinton, 2008) with distances estimated using the convolutional metric. We see in Figure 3C that responses corresponding to same visual stimulus (same color) cluster in the same region of embedding space reflecting the ability of the response space metric to discriminate distinct stimuli.
3.3 LEARNED METRIC CAPTURES STIMULUS SIMILARITY.
Although we trained the metric only based on whether pairs of responses are generated by the same stimulus, Figure 3C suggests that the learned response metric provides additional discriminative stimulus information. In the following sections, we attempt to quantitatively measure how well the response metric captures stimulus information by performing stimulus reconstruction. Our hypothesis is that stimulus reconstruction provides a proxy for the ultimate goal of assessing perceptual similarity.
Stimulus reconstruction has a rich history in the neural coding literature and presents significant technical challenges. To simplify the problem, we focus on linear reconstruction (Bialek et al., 1991; Rieke et al.; Roddey & Jacobs, 1996) because the objective is clear, the problem is convex and the resulting reconstruction is information rich (Stanley et al., 1999; Berry et al., 1997). One limitation of this approach is that linear reconstruction does not capture rich nonlinearities potentially present in encoding. For this reason, we focus subsequent analysis on the quadratic and Hamming metrics
and reserve the analysis of the nonlinear embedding for future work with nonlinear reconstruction techniques (see Discussion).
A technical issue that arises in the context of metric space analysis is the infeasibility of computing the embeddings for all spike patterns across large numbers of cells (e.g. 66 cells in the data of Figure 3 produces 266 responses). Therefore we focus on a spatially localized and overlapping population of 13 RGCs (6 ON and 7 OFF parasol cells in Figure 1B) because we can explicitly list all the 213 possible response patterns. Training data was accrued from RGC responses to 5 repeated presentations of a 5 minute long white noise sequence. The first 4 minutes of each presentation was employed for training; the last minute was employed for testing.
We examined the similarity between the decoded stimulus and the target stimulus, for responses that, according to our learned quadratic metric, are increasingly distant from the target. Figure 4A (first column, third row) shows the spatial profile of the linearly decoded target response 1.
We next calculate the distance of this target firing pattern to all 213 firing patterns and rank order them based on the learned metric. Figure 4A, top rows, shows firing patterns at the 1%, 2.5%, 5% and 75% percentiles. Below these firing patterns are the associated with linearly decoded stimuli, and the errors with respect to the target firing pattern. As we choose patterns farther from the target in terms of our metric, the distance between the decoded stimulus for the chosen firing pattern and target firing pattern systematically increases.
We quantify this observation in Figure 4B by randomly selecting pairs of responses from the test data and calculating the optimal linearly decoded stimuli associated with them (see Methods). We then plot the mean squared error (MSE) between the linearly decoded stimuli against the normalized metric distance between the responses. The decoding error systematically increases as the metric distance between the corresponding responses increases, for both the learned quadratic metric (blue) as well the Hamming distance (green). However, the distances generated by Hamming distance are
1Note that the reconstruction is based on the static population response pattern. We remove the time dimension by approximating ON and OFF parasol cell responses with a temporal filter with identical (but with oppositely signed) filters. Subsequent analyses are performed by only decoding the temporally filtered stimulus. The temporally filtered stimulus s is decoded as s = Ar+ b , where parameters A, b are estimated from RGC recordings.
discrete and therefore provide a less granular representation of the decoding errors associated with the stimuli.
3.4 LEARNED RESPONSE METRIC MAY IMPROVE PERFORMANCE OF RETINAL PROSTHESIS.
Using recorded experimental data, we now show how response metrics could improve the function of retinal prostheses by selecting optimal electrical stimulation patterns. For a given target response, we use the learned quadratic metric to select the best electrical stimulation pattern, and evaluate the effectiveness of this approach by linearly decoding the stimulus from the elicited responses.
Calibration of RGC responses to electrical stimulation patterns was performed by repeating a given electrical stimulation pattern 25 times, at each of 40 current levels, in a random order. Due to the limited duration of experiments, we focused on stimulation patterns in which only one electrode was active. The data was spike sorted and spiking probability was computed for each cell by averaging across trials for each electrical stimulation pattern (Mena et al., 2017). For each cell and electrode, the probability of firing as a function of stimulation current was approximated with a sigmoid function.
Since the RGC response to electrical stimulation is probabilistic, we evaluate each stimulation pattern by the expected distance between the elicited responses and the target firing pattern. For a quadratic response metric this can be easily computed in closed form. Given a response metric, we rank different stimulation patterns based on the expected distance to the target firing pattern. In Figure 5A and B (first columns) we show example target response patterns and the corresponding linearly decoded visual stimulus. We then analyze the best stimulation pattern determined by the learned quadratic metric, and by the Hamming distance. The responses sampled from the response distributions for the selected stimulation patterns are shown in Figures 5A and B (second and third columns each). We find that the linearly decoded stimuli were closer to the target when the stimulation was chosen via the learned response metric compared to the Hamming distance.
To quantify this behavior, we calculated the mean squared error between the decoded stimuli when the stimulation was chosen using the learned metric and the Hamming distance (Figure 5C). The
learned metric and Hamming metric identify the same stimulation pattern and hence achieve the same error for 49% of the target responses observed. However, on 33% of the target responses, the learned metric achieves lower mean squared error than the Hamming distance; conversely, the learned metric has larger MSE than the Hamming distance on 18% of the target responses.
The above analysis demonstrates the benefit of using the learned metric over the Hamming distance to choose the best stimulation pattern. However, the collection of available electrical stimulation patterns might change over time due to hardware or biophysical constraints. To assess the improvement in such cases, we next ask how well the learned metric performs relative to the Hamming distance if we choose the kth best current pattern using each metric (Figure 5D). Increasing k for the learned metric leads to higher MSE in terms of the decoded stimulus. Importantly, the learned metric achieves systematically lower MSE than the Hamming distance across the nearest k ≤ 10 stimulation patterns. These results indicate that the learned metric systematically selects better electrical stimulation patterns for eliciting reasonably close firing patterns.
4 DISCUSSION
The learned metric approach has two major potential implications for visual neuroscience. First, it provides a novel method to find “symbols” in the neural code of the retina that are similar in the sense that they indicate the presence of similar stimuli (Ganmor et al., 2015). Second, it has an application to retinal prosthesis technology, in which hardware constraints demand that the set of neural responses that can be generated with a device be used to effectively transmit useful visual information. For this application, a metric on responses that reflects visual stimulus similarity could be extremely useful.
The present approach differs from previously proposed spike train metrics (reviewed in (Victor, 2005)). Previous approaches have employed unsupervised techniques to cluster nearby spike patterns (Ganmor et al., 2015; Prentice et al., 2016; Gardella et al., 2017) or employed user-specified, parametric approaches (Victor & Purpura, 1997; Aronov et al., 2003). In the case of single snapshots in time used here, the latter approach (Victor-Purpura metric) has only one degree of freedom, which is a user-specified cost associated with moving spikes from one cell to another. In our proposed method, the relative importance of cell identity is learned directly from the statistics of population firing patterns.
The present work is a stepping stone towards building an encoding algorithm for retinal prostheses. In this paper, we learn the metric using light evoked responses. However, we need to estimate this metric in a blind retina, which has no light evoked responses. The convolutional metric is adaptable to any RGC population by merely noting cell types and center locations. Thus a convolutional metric could be trained on multiple healthy retinas and applied to a blind retina. Preliminary results in this direction indicate that a convolutional metric trained on half of the cells in a retinal recording (training data) generalizes to the other half (validation data), yielding performance higher than a quadratic metric (and comparable to a convolutional metric) trained directly on the validation data.
Additional techniques may also be helpful in extending our method to data involving many cells, temporal responses, and additional response structure. For example, using recurrent neural networks (Lipton et al., 2015) to embed responses may help compute distances between spiking patterns consisting of multiple time bins, perhaps of unequal length. Boosting (Freund & Schapire, 1999) may help combine multiple efficiently learned metrics for smaller, spatially localized groups of cells. Other metrics may be developed to capture invariances learned by commonly used encoding models (Chichilnisky, 2001; Pillow et al., 2008). Also, triplet mining techniques (i.e., choosing hard negatives), a commonly used trick in computer vision, may improve efficiency (Schroff et al., 2015; Oh Song et al., 2016). Novel metrics could also be learned with additional structure in population responses, such as the highly structured correlated activity in RGCs (Mastronarde, 1983; Greschner et al., 2011). This noise correlation structure may be learnable using negative examples that destroy the noise correlations in data while preserving light response properties, by taking responses of different cells from different repeats of the stimulus.
Note that the convolutional metric outperforms the quadratic metric at both global (ROC curves) and local (precision recall curves) scales. However, using current retinal prosthesis technologies, we might be able to resolve information only up to a particular scale. For current retinal prostheses,
capturing global structure may be of greatest importance, because state-of-the-art technology has a relatively coarse vocabulary for stimulating RGCs (Humayun et al., 2012; Zrenner et al., 2011) (see also Figure 1). Specifically, the “nearest” elicited firing pattern is “far” in terms of the corresponding visual stimulus (Figure 5). In terms of the proposed learned metric, the nearest feasible firing pattern achievable by electrical stimulation in our experiments is at the 10th percentile of all possible firing patterns. In this context, the average closest stimulation pattern, expressed as a percentile of the learned metric distances, provides a valuable benchmark to measure the performance of a prosthesis and how that performance is affected by advances in the underlying hardware and software.
ACKNOWLEDGEMENTS
We thank Vineet Gupta for numerous helpful discussions. We thank Pawel Hottowy, Alexander Sher, Alan M. Litke, Alexandra Tikidji-Hamburyan, Georges Goetz, Nora Brackbill, Colleen Rhoades and Lauren Grosberg for help with experiments. Research funding provided by Internship program at Google Brain (NPS), DARPA Contract FA8650-16-1-765 (EJC).
A APPENDIX
A.1 DETAILS OF THE CONVOLUTIONAL METRIC
We build a hierarchical, convolutional network to mirror the translation invariance expected in the receptive field organization of the retina. The goal of this network is to flexibly capture population activity of ON and OFF cells but employ minimal knowledge about cell receptive fields. The reason for this approach is to build a model that may be amenable to a retinal prosthetic in which the characterization of individual retinal ganglion cells is limited (Jepson et al., 2014a;b).
In particular, the network employs knowledge of the receptive field locations and firing rates of individual cells but the network is independent of the number of cells in the retina. The latter point is achieved by embedding the responses of neurons into pathways grouped by cell type. In our experiments, we focus on 2 cell types (ON and OFF parasols), thus we employ a 2 channel pathway (Kandel et al., 2000).
The network receives as input the spiking activity of ON and OFF parasols and embeds these spike patterns as one-hot vectors placed at the spatial locations of each cell’s receptive field. The resulting pattern of activations is summed across all cells in the ON and OFF populations, respectively, and passed through several convolutional layers of a network. Successive layers shrink the spatial activation size of the representation, while increasing the number of filter channels (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). The final embedding response vector has 1/16th the number of pixels in the stimulus and represents the flattened representation of the last layer of the network.
Let c denote the number of different cells. The RGC population response is a vector r ∈ {0, 1}^c.
• Represent responses as vectors over {+1, −1} via r̃ = 2(r − 0.5).
• Compute the scale for each cell as a cubic function of its mean firing rate µ_i:
s_i = a_0 µ_i^3 + a_1 µ_i^2 + a_2 µ_i + a_3.
• Map each cell to its center location on a grid with the same spatial dimensions as the visual stimulus. Let M_i be the grid embedding of cell i, so M_i is zero at all positions except the center of the cell.
• Perform a separable 5 × 5 convolution of stride 1 on each M_i to get the RF estimate of the cell, M̃_i.
• Add the activations of cells of the same type to get the total activation for a given cell type. Hence, the activation map for each cell type t is A_t = Σ_i r̃_i s_i M̃_i, with the sum taken over cells of type t. Subsequent layers receive as input a two-layered activation map corresponding to ON and OFF parasol cells (a sketch of this construction is given after the list).
• The convolutional layers further combine information across multiple cells of different types. The details of the different layers are shown in Figure 6 and Table 1.
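The following numpy sketch assembles the per-type activation map described in the list above; by linearity, summing the scaled impulses first and convolving once is equivalent to convolving each M_i separately and then summing. The kernel weights and polynomial coefficients here are placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def activation_map(r, centers, mu, coeffs, grid_shape, k1d=None):
    """Activation map A_t for one cell type (a sketch).

    r: (c,) binary responses; centers: (c, 2) integer grid positions;
    mu: (c,) mean firing rates; coeffs: (a0, a1, a2, a3) of the cubic scale.
    """
    if k1d is None:
        k1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
        k1d /= k1d.sum()
    kernel = np.outer(k1d, k1d)      # separable 5x5 kernel
    r_tilde = 2.0 * (r - 0.5)        # map {0, 1} to {-1, +1}
    a0, a1, a2, a3 = coeffs
    s = a0 * mu**3 + a1 * mu**2 + a2 * mu + a3
    M = np.zeros(grid_shape)
    for ri, si, (y, x) in zip(r_tilde, s, centers):
        M[y, x] += ri * si           # scaled impulse at the cell center
    return convolve2d(M, kernel, mode='same')
```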
A.2 ACCURACY OF THE LINEAR DECODER
For the latter analyses assessing the quality of the metric, we reconstruct the stimulus from neural responses with linear decoding. In this section we demonstrate that even though the linear decoder is rather simplistic, the reconstructions are on par with a non-parametric decoding method which averages the stimuli corresponding to the response pattern. In Figure 7A, we see that the linear decoder has very similar spatial structure to the non-parametric decoder. To quantify this, we compute the mean-squared error between the two methods of decoding, normalized by the magnitude of the non-parametric decoder (Figure 7B, blue dots). The error of linear decoding is comparable to the error between two non-parametric decodings computed using independent samples of stimuli (Figure 7B, green dots). These observations show that the linear decoder is a reasonable first-order approximation of the encoded stimulus. | 1. What is the main contribution of the paper on learning a metric between neural responses?
2. What are the potential applications of the proposed method in neural prosthesis and understanding neural representations?
3. Do you have any concerns regarding the motivation and neurobiological perspective of the proposed framework?
4. How convincing are the results presented in the paper, particularly in comparison to other methods such as Hamming distance?
5. Is the approach proposed in the paper worthwhile pursuing further? | Review | Review
In their paper, the authors propose to learn a metric between neural responses by either optimizing a quadratic form or a deep neural network. The pseudometric is optimized by positing that the distance between two neural responses to two repeats of the same stimulus should be smaller than the distance between responses to different stimuli. They do so with the application of improving neural prosthesis in mind.
First of all, I am doubtful about this application: I don't think the task of a neural prosthesis can ever be to produce identical output patterns in response to the same stimuli. Nevertheless, a good metric for neural responses that goes beyond e.g. Hamming distance or squared error between spike density functions would be clearly useful for understanding neural representations.
Second, I find the framework proposed by the authors interesting, but not clearly motivated from a neurobiological perspective, as the similarity between stimuli does not appear to play a role in the optimized loss function. For two similar stimuli, the natural responses of a neural population can be more similar than the responses to two repetitions of the same stimulus.
Third, the results presented by the authors are not convincing throughout. For example, 4B suggests that indeed the Hamming distance achieves lower error than the learned representation.
Nevertheless, it is an interesting approach that is worthwhile pursuing further. |
ICLR | Title
Learning a neural response metric for retinal prosthesis
Abstract
Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce typical patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected so as to produce a neural response as close as possible to the desired response. This requires a technique for computing the distance between a desired response and an achievable response that is meaningful in terms of the visual signal being conveyed. We propose a method to learn a metric on neural responses directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of visual inputs. Using data from electrical stimulation experiments, we demonstrate that the learned metric could produce improvements in the performance of a retinal prosthesis.
1 INTRODUCTION
An important application of neuroscience research is the development of electronic devices to replace the function of diseased or damaged neural circuits (Wilson et al., 1991; Schwartz, 2004). Artificial vision has been a particularly challenging modality due to the richness of visual information, its diverse uses in perception and behavior, and the complexity of fabricating a device that can interface effectively with neural circuitry (Stingl et al., 2013; Wilke et al., 2011; Jepson et al., 2014a).
The most advanced example is a retinal prosthesis: a device that replaces the function of neural circuitry in the retina lost to degenerative disease. Most of the computational work related to this application has focused on building encoding models that use the visual image to accurately predict the spiking activity of populations of retinal ganglion cells (RGCs), the output neurons of the retina that convey visual information to the brain. Leading models include linear models (Chichilnisky, 2001), probabilistic point-process models (Pillow et al., 2008) and recently proposed models employing rich nonlinearities (McIntosh et al.; Batty et al.; Shah et al., 2017).
However, an accurate encoding model, although valuable, is insufficient. Any retinal prosthesis – whether based on electrical stimulation (Sekirnjak et al., 2008) or optical stimulation (Boyden et al., 2005; Bernstein et al., 2008) – is limited in its ability to create arbitrary desired patterns of neural activity, due to inefficiencies or lack of specificity in the stimulation modality (Barrett et al., 2014; Jepson et al., 2014a). Thus, a given stimulation system can only achieve a limited vocabulary of elicited spike patterns. Although a powerful and accurate encoding model might indicate that a particular spike pattern would be the natural biological response to the incident visual stimulus, the desired spike pattern might not reside within the feasible set of the stimulation device (Figure 1).
Previous studies (Jepson et al., 2014b) have addressed this problem by selecting the electrical stimulation which minimizes the number of unmatched spikes across cells – equivalent to the Hamming distance between two binary vectors. Even though a Hamming distance is easy to compute, this solution is not necessarily optimal. The goal of a prosthetics device should be to instead select an
electrical stimulation pattern that produces a response as close as possible to the desired pattern of activity in terms of the elicited visual sensation (Figure 1C). In lieu of measuring the visual sensation produced by a prosthetic, we instead posit that one may infer a distance metric based on the signal and noise properties of individual and populations of neurons (Shlens et al., 2009; Pillow et al., 2008; Field & Chichilnisky, 2007). In contrast, previous approaches to spike metrics have focused on user-specified, parametric functions (Victor & Purpura, 1996; 1997; Victor, 2005) or unsupervised techniques to cluster nearby spike patterns (van Rossum, 2001; Dubbs et al., 2010; Ganmor et al., 2015).
In this work, we propose a neural response metric learned directly from the statistics and structure of firing patterns in neural populations, with the aim of using it to select optimal electrical stimulation patterns in a prosthesis device. In particular, we learn a neural response metric by applying ideas from metric learning to recordings of RGC populations in non-human primate retina. We demonstrate that the learned metric provides an intuitive, meaningful representation of the similarity between spike patterns in the RGC population, capturing the statistics of neural responses as well as similarity between visual images. Finally, we use this metric to select the optimal electrical stimulation pattern within the constraints of the electrical interface to a population of RGCs.
2 METRIC AND SIMILARITY LEARNING
In this section we describe the algorithmic framework for learning pseudometrics or similarity measures in neural response space. We start by introducing notation and conventions that we use throughout the paper. We use boldface letters to denote vectors and upper case letters to denote matrices. We denote the symmetrization operator of a square matrix M by sym(M) = (1/2)(M + M^T).
A single frame of visual stimulus, s, is an image represented as an n × n matrix. The space of possible stimuli is S ⊂ ℝ^{n×n}. A sequence s_l, …, s_m of m − l + 1 frames, where s_j ∈ S, is denoted as s_{l:m}. In order to simplify our notation, we define the responses of the cells to be a p-dimensional vector and the space of possible responses as R ⊆ ℝ^p. Analogously, a sequence of cell activities r_t for t = l, …, m is denoted r_{l:m}. To simplify the presentation below, we confine the visual stimulus to be a single image and the corresponding response of each cell to be a scalar.
2.1 RELATED WORK
Metric and similarity learning using contrastive loss (Chopra et al., 2005; Hadsell et al., 2006) and triplet loss (Shalev-Shwartz et al., 2004; Weinberger & Saul, 2009) have been used extensively in several domains. In computer vision, these methods achieve state-of-the-art performance on face recognition (Schroff et al., 2015; Sun et al., 2014; Taigman et al., 2014) and image retrieval (Wang et al., 2014). A central theme of this work has focused on improving metric learning by mining semi-hard negatives (Schroff et al., 2015). Because many negatives provide minimal information, these methods use a partially learned metric to identify negatives that may maximally improve the quality of the metric given a fixed number of updates. To avoid the computational burden imposed by such methods, some works have proposed alternative loss functions which either make efficient use of all the negatives in a batch (Oh Song et al., 2016) or multiple classes using n-tuplets (Sohn, 2016). Our method is similar to these methods as we make efficient use of all the negatives in a batch as in (Oh Song et al., 2016) but also use a simplified, softmax-based loss function (Sohn, 2016).
2.2 EMPIRICAL LOSS MINIMIZATION
Given the population response space R, we learn a function h : R × R → ℝ which captures invariances in the spiking response when the same stimulus is presented multiple times. The scoring function h is viewed either as a similarity function or a pseudometric. To distinguish between the two cases, we use d(·, ·) to denote a pseudometric. A pseudometric d needs to satisfy:
Positivity. d(r_1, r_2) ≥ 0 and d(r, r) = 0.
Sub-additivity. d(r_1, r_2) + d(r_2, r_3) ≥ d(r_1, r_3).
Symmetry. d(r_1, r_2) = d(r_2, r_1).
During the experiments, repeats of the same sequence of visual stimuli are presented. The responses collected during the i-th presentation (repeat) of the visual stimulus are denoted by (s^i_t, r^i_t). Here s^i_t is the stimulus history which elicits population response r^i_t at time t. The goal of this approach is to learn a metric such that pairs of responses generated during different repeats of the same stimulus are closer, or more similar, than pairs of responses generated by different stimuli. We slice the data into triplets of the form (r, r_+, r_−), where r and r_+ are responses of cells to the same stimulus while r_− is the response to a different visual stimulus (Figure 2A). We refer to (r, r_+) as a positive pair and (r, r_−) as a negative pair (Figure 2B).
A common method to improve the learned metrics is to choose difficult negatives as described above. As it can be computationally demanding to mine hard negatives, we found that a much simpler
strategy of randomly sampling a common set of negatives for all the positive examples in the batch is effective. Hence we first sample positive pairs of responses corresponding to random stimulus times, together with a common set of negative responses generated by stimuli distinct from those of the positive responses. A batch of triplets is hence denoted by T = {{r^i, r^i_+}, {r^j_−}}.
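A minimal sketch of this batch construction is given below, assuming responses have been binned into an array of shape (repeats, times, cells); the sampling details (e.g. forcing the two repeats of a positive pair to differ) are our own illustrative choices.

```python
import numpy as np

def sample_triplet_batch(responses, n_pos, n_neg, rng):
    """Sample one batch T = {{r^i, r^i_+}, {r^j_-}} (a sketch).

    responses: (n_repeats, n_times, n_cells) binned spike patterns.
    Positive pairs take two different repeats at the same time; negatives
    form a common set drawn from other, randomly chosen times.
    """
    n_rep, n_t, _ = responses.shape
    t_pos = rng.choice(n_t, size=n_pos, replace=False)
    r0 = rng.integers(n_rep, size=n_pos)
    r1 = (r0 + 1 + rng.integers(n_rep - 1, size=n_pos)) % n_rep  # r1 != r0
    anchors, positives = responses[r0, t_pos], responses[r1, t_pos]
    t_neg = rng.choice(np.setdiff1d(np.arange(n_t), t_pos),
                       size=n_neg, replace=False)
    negatives = responses[rng.integers(n_rep, size=n_neg), t_neg]
    return anchors, positives, negatives
```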
Given a training set of triplets T, the goal is to find a pseudometric such that for most (r^i, r^i_+, {r^j_−}) ∈ T the distance between responses of two repeats of the same stimulus is smaller than their distance to any of the irrelevant response vectors,

d(r^i, r^i_+) < min_j d(r^i, r^j_−). (1)
We cast the learning task as empirical risk minimization of the form

(1/|T|) Σ_{(r^i, r^i_+, {r^j_−}) ∈ T} ℓ(r^i, r^i_+, {r^j_−}),

where ℓ(·) is a differentiable, typically convex, relaxation of the ordering constraints from (1). We use

ℓ(r^i, r^i_+, {r^j_−}) = β log(1 + Σ_j exp[(d(r^i, r^i_+) − d(r^i, r^j_−)) / β])

as the surrogate loss. We set β = 10 in our implementation.
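As an illustration, the surrogate loss above can be computed for a single triplet group in a few lines; the numpy sketch below adds a standard log-sum-exp shift for numerical stability (a detail the paper does not discuss).

```python
import numpy as np

def surrogate_loss(d_pos, d_neg, beta=10.0):
    """beta * log(1 + sum_j exp((d(r, r_+) - d(r, r_j-)) / beta)).

    d_pos: scalar distance d(r, r_+); d_neg: (k,) distances d(r, r_j-).
    """
    z = (d_pos - d_neg) / beta
    m = max(0.0, z.max())  # shift so no exponent exceeds zero
    return beta * (m + np.log(np.exp(-m) + np.exp(z - m).sum()))
```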
In the case of similarity learning, we swap the role of the pairs and define

ℓ(r^i, r^i_+, {r^j_−}) = β log(1 + Σ_j exp[(h(r^i, r^j_−) − h(r^i, r^i_+)) / β]).

We implemented two parametric forms for distance and similarity functions. The first is a quadratic form where A ⪰ 0 and

h_A(r_1, r_2) = r_1^T A r_2 and d_A(r_1, r_2) = (r_1 − r_2)^T A (r_1 − r_2). (2)
We learn the parameters by minimizing the loss using Adagrad (Duchi et al., 2011). We project A onto the space of positive semi-definite matrices after every update using singular value decomposition. Concretely, we rewrite A as UDU^T, where U is a unitary matrix and D is a diagonal matrix. We then threshold the diagonal elements of D to be non-negative.
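A minimal numpy sketch of this projection step is shown below; since A is kept symmetric, an eigendecomposition plays the role of the SVD described above.

```python
import numpy as np

def project_psd(A):
    """Project a matrix onto the PSD cone by clipping eigenvalues at zero."""
    A = 0.5 * (A + A.T)                    # symmetrize first
    w, U = np.linalg.eigh(A)               # A = U diag(w) U^T
    return (U * np.clip(w, 0.0, None)) @ U.T
```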
2.3 EXTENDING METRIC SPACES FOR UNOBSERVED NEURAL POPULATIONS
The quadratic metric provides a good demonstration of the hypothesis that a learned metric space may be suitable. However, a quadratic metric is not feasible for a real prosthetic device because such a metric must be trained on visually-evoked spiking activity of a neural population. In a retinal prosthetic, such data are not available because the retina does not respond to light. Furthermore, a quadratic model contains limited modeling capacity to capture nonlinear visual processing in the retina (Field & Chichilnisky, 2007).
To address these issues, we introduce a nonlinear embedding based on a convolutional neural network (CNN). The CNN encodes each cell’s spiking responses in an embedding space grouped by cell type and cell body location before performing a series of nonlinear operations to map the response embedding from the response space R to ℝ^p. One benefit of this approach is that this model has an embedding dimensionality independent of the number of cells recorded while only employing knowledge of the cell body location and cell type. The cell body location and cell type are identifiable from recordings of non-visually-evoked (spontaneous) neural activity in the retina (Li et al., 2015; Richard et al., 2015).
The resulting response metric may be generalized to blind retinas by merely providing cell center and cell type information. That is, no visually-evoked spiking activity is necessary to train an embedding for a new retina. Even though the model may be fit on non visually-evoked spiking activity, this
model class is superior to the quadratic model when fit to a given retina. We discuss preliminary experiments for predicting the activity in unobserved retinas in the Discussion.
We reserve a complete discussion of model architecture and training procedure for the Appendix. In brief, we employ a hierarchical, convolutional network topology to mirror the translation invariance expected in the receptive field organization of the retina. The convolutional network consists of 595K parameters across 7 layers and employs batch normalization to accelerate training. Let φ(r) be the convolutional embedding of responses. The similarity and metric learned using the convolutional network are given as

h_φ(r_1, r_2) = φ(r_1) · φ(r_2) and d_φ(r_1, r_2) = ‖φ(r_1) − φ(r_2)‖_2. (3)
We learn the parameters by minimizing the loss using Adam (Kingma & Ba, 2014).
3 RESULTS
3.1 EXPERIMENTAL SETUP
Spiking responses from hundreds of retinal ganglion cells (RGCs) in primate retina were recorded using a 512 electrode array system (Litke et al., 2004; Frechette et al., 2005). ON and OFF parasol RGC types were identified using visual stimulation with binary white noise and reverse correlation (Chichilnisky, 2001).
Since each analysis requires different stimulus conditions and numbers of cells, we leave the details of each preparation to the subsequent sections. For each analysis, spike trains were discretized at the 120 Hz frame rate of the display (bins of 8.33ms), and responses across all the cells for 1 time bin were used to generate each training example.
In the following sections, we quantitatively assess the quality of multiple learned metrics – each metric with increasing complexity – with respect to a baseline (Hamming distance). First, we assess the quality of the learned metrics with respect to traditional error analysis. Second, we assess the quality of the learned embeddings with respect to optimal decoding of the stimulus. Finally, we demonstrate the utility of the learned metric by employing the metric in a real electrical stimulation experiment.
3.2 QUANTITATIVE EVALUATION OF LEARNED METRIC SPACE.
The quality of a metric in our context can be measured by its effectiveness for determining whether a pair of firing patterns arose from the same visual stimulus or from distinct visual stimuli. To evaluate the metric at the scale of large RGC populations, we focus our analysis on responses of a collection of 36 OFF parasol cells and 30 ON parasol cells to 99 repeats of a 10 second long white noise stimulus clip. The responses were partitioned into training (first 8 seconds) and testing (last 2 seconds) of each trial.
We assessed a range of learned embedding models and baselines by employing receiver-operating characteristic (ROC) analysis. Specifically, we selected the population firing pattern, r, at a particular offset in time in the experiment (corresponding to a visual stimulus history) and compared this firing pattern to two other types of firing patterns: (1) the firing pattern from the same group of cells at the same time during a second repeated presentation of the stimulus, r+; and (2) the firing pattern at a distinct, randomly selected time point, r−. For a given threshold, if the metric results in a correct classification of r+ as the same stimulus, we termed the result a true positive. For the same threshold, if an embedding metric incorrectly classified r− as the same stimulus, we termed it a false positive. Note that unlike training, we do not choose a common set of negatives for testing.
Figure 3A traces out the trade-off between the false positive rate and true positive rate across a range of thresholds in an assortment of embedding models for neural population activity. Better models trace out curves that bend to the upper-left of the figure. The line of equality indicates a model that is performing at chance. A simple baseline model of a Hamming distance (red curve) performs least accurately. A quadratic metric which permits variable weight for each neuron and interaction between pairs of neurons improves the performance further (green curve). Finally, replacing a quadratic metric with a euclidean distance between embedding of responses using a convolutional neural network improves the performance further (blue curve).
The ROC analysis provides strong evidence that increasingly sophisticated embedding models learn global structure above and beyond a Hamming distance metric. We also examined how the local structure of the space is captured by the embedding metric by calculating the learned embeddings on a test dataset consisting of 99 repeats each of the 10 different visual stimuli. We randomly selected a firing pattern r from one presentation of the stimulus, and identified k nearest neighbors according to our metric, for increasing k. Among the k nearest neighbors, we assessed precision, i.e. what fraction of the nearest neighbors correspond to 98 other presentations of the same stimulus. A perfect learned embedding model would achieve a precision of 1 for k ≤ 98 and 98/k otherwise (Figure 3B, dashed). We also measured recall, i.e. what fraction of the remaining 98 presentations of the same stimulus are within the k nearest neighbors. A perfect learned embedding model would achieve recall of k/98 for k ≤ 98 and 1 otherwise (Figure 3B, dashed). Figure 3B highlights the performance of various learned methods across increasing k. The results indicate that the precision and recall are below an optimal embedding, but the convolutional metric performs better than quadratic and Hamming metrics.
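To make the evaluation above concrete, here is a minimal numpy sketch of precision and recall at k for a single query response; the array layout and names are our own illustrative assumptions.

```python
import numpy as np

def precision_recall_at_k(dists, same_stim, ks):
    """dists: (n,) metric distances from one query to all candidates;
    same_stim: (n,) booleans marking the other repeats of the query's
    stimulus (98 in the experiment above); ks: 1-based cutoffs."""
    order = np.argsort(dists)             # nearest neighbors first
    hits = np.cumsum(same_stim[order])    # positives within the top k
    ks = np.asarray(ks)
    return hits[ks - 1] / ks, hits[ks - 1] / same_stim.sum()
```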
To visualize the discriminability of the response metric, we embed the 99 responses to 10 distinct stimuli using t-SNE (Maaten & Hinton, 2008) with distances estimated using the convolutional metric. We see in Figure 3C that responses corresponding to same visual stimulus (same color) cluster in the same region of embedding space reflecting the ability of the response space metric to discriminate distinct stimuli.
3.3 LEARNED METRIC CAPTURES STIMULUS SIMILARITY.
Although we trained the metric only based on whether pairs of responses are generated by the same stimulus, Figure 3C suggests that the learned response metric provides additional discriminative stimulus information. In the following sections, we attempt to quantitatively measure how well the response metric captures stimulus information by performing stimulus reconstruction. Our hypothesis is that stimulus reconstruction provides a proxy for the ultimate goal of assessing perceptual similarity.
Stimulus reconstruction has a rich history in the neural coding literature and presents significant technical challenges. To simplify the problem, we focus on linear reconstruction (Bialek et al., 1991; Rieke et al.; Roddey & Jacobs, 1996) because the objective is clear, the problem is convex and the resulting reconstruction is information rich (Stanley et al., 1999; Berry et al., 1997). One limitation of this approach is that linear reconstruction does not capture rich nonlinearities potentially present in encoding. For this reason, we focus subsequent analysis on the quadratic and Hamming metrics
and reserve the analysis of the nonlinear embedding for future work with nonlinear reconstruction techniques (see Discussion).
A technical issue that arises in the context of metric space analysis is the infeasibility of computing the embeddings for all spike patterns across large numbers of cells (e.g. the 66 cells in the data of Figure 3 produce 2^66 responses). Therefore we focus on a spatially localized and overlapping population of 13 RGCs (6 ON and 7 OFF parasol cells in Figure 1B) because we can explicitly list all 2^13 possible response patterns. Training data was accrued from RGC responses to 5 repeated presentations of a 5 minute long white noise sequence. The first 4 minutes of each presentation were employed for training; the last minute was employed for testing.
We examined the similarity between the decoded stimulus and the target stimulus, for responses that, according to our learned quadratic metric, are increasingly distant from the target. Figure 4A (first column, third row) shows the spatial profile of the linearly decoded target response.¹
We next calculate the distance of this target firing pattern to all 2^13 firing patterns and rank order them based on the learned metric. Figure 4A, top rows, shows firing patterns at the 1%, 2.5%, 5% and 75% percentiles. Below these firing patterns are the associated linearly decoded stimuli, and the errors with respect to the target firing pattern. As we choose patterns farther from the target in terms of our metric, the distance between the decoded stimulus for the chosen firing pattern and the target firing pattern systematically increases.
We quantify this observation in Figure 4B by randomly selecting pairs of responses from the test data and calculating the optimal linearly decoded stimuli associated with them (see Methods). We then plot the mean squared error (MSE) between the linearly decoded stimuli against the normalized metric distance between the responses. The decoding error systematically increases as the metric distance between the corresponding responses increases, for both the learned quadratic metric (blue) and the Hamming distance (green). However, the distances generated by the Hamming distance are
¹Note that the reconstruction is based on the static population response pattern. We remove the time dimension by approximating ON and OFF parasol cell responses with identical (but oppositely signed) temporal filters. Subsequent analyses are performed by only decoding the temporally filtered stimulus. The temporally filtered stimulus s is decoded as s = Ar + b, where the parameters A, b are estimated from RGC recordings.
discrete and therefore provide a less granular representation of the decoding errors associated with the stimuli.
3.4 LEARNED RESPONSE METRIC MAY IMPROVE PERFORMANCE OF RETINAL PROSTHESIS.
Using recorded experimental data, we now show how response metrics could improve the function of retinal prostheses by selecting optimal electrical stimulation patterns. For a given target response, we use the learned quadratic metric to select the best electrical stimulation pattern, and evaluate the effectiveness of this approach by linearly decoding the stimulus from the elicited responses.
Calibration of RGC responses to electrical stimulation patterns was performed by repeating a given electrical stimulation pattern 25 times, at each of 40 current levels, in a random order. Due to the limited duration of experiments, we focused on stimulation patterns in which only one electrode was active. The data was spike sorted and spiking probability was computed for each cell by averaging across trials for each electrical stimulation pattern (Mena et al., 2017). For each cell and electrode, the probability of firing as a function of stimulation current was approximated with a sigmoid function.
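The sigmoid fit can be done with a standard least-squares routine; the sketch below assumes a midpoint/slope parameterization, which is an illustrative choice since the paper does not specify the exact functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(current, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-(current - midpoint) / slope))

def fit_activation_curve(currents, p_hat):
    """currents: (40,) stimulation amplitudes; p_hat: (40,) spiking
    probabilities for one (cell, electrode) pair, averaged over trials."""
    params, _ = curve_fit(sigmoid, currents, p_hat,
                          p0=[np.median(currents), 1.0])
    return params  # (midpoint, slope)
```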
Since the RGC response to electrical stimulation is probabilistic, we evaluate each stimulation pattern by the expected distance between the elicited responses and the target firing pattern. For a quadratic response metric this can be easily computed in closed form. Given a response metric, we rank different stimulation patterns based on the expected distance to the target firing pattern. In Figure 5A and B (first columns) we show example target response patterns and the corresponding linearly decoded visual stimulus. We then analyze the best stimulation pattern determined by the learned quadratic metric, and by the Hamming distance. The responses sampled from the response distributions for the selected stimulation patterns are shown in Figures 5A and B (second and third columns each). We find that the linearly decoded stimuli were closer to the target when the stimulation was chosen via the learned response metric compared to the Hamming distance.
To quantify this behavior, we calculated the mean squared error between the decoded stimuli when the stimulation was chosen using the learned metric and the Hamming distance (Figure 5C). The
learned metric and Hamming metric identify the same stimulation pattern and hence achieve the same error for 49% of the target responses observed. However, on 33% of the target responses, the learned metric achieves lower mean squared error than the Hamming distance; conversely, the learned metric has larger MSE than the Hamming distance on 18% of the target responses.
The above analysis demonstrates the benefit of using the learned metric over the Hamming distance to choose the best stimulation pattern. However, the collection of available electrical stimulation patterns might change over time due to hardware or biophysical constraints. To assess the improvement in such cases, we next ask how well the learned metric performs relative to the Hamming distance if we choose the kth best current pattern using each metric (Figure 5D). Increasing k for the learned metric leads to higher MSE in terms of the decoded stimulus. Importantly, the learned metric achieves systematically lower MSE than the Hamming distance across the nearest k ≤ 10 stimulation patterns. These results indicate that the learned metric systematically selects better electrical stimulation patterns for eliciting reasonably close firing patterns.
4 DISCUSSION
The learned metric approach has two major potential implications for visual neuroscience. First, it provides a novel method to find “symbols” in the neural code of the retina that are similar in the sense that they indicate the presence of similar stimuli (Ganmor et al., 2015). Second, it has an application to retinal prosthesis technology, in which hardware constraints demand that the set of neural responses that can be generated with a device be used to effectively transmit useful visual information. For this application, a metric on responses that reflects visual stimulus similarity could be extremely useful.
The present approach differs from previously proposed spike train metrics (reviewed in (Victor, 2005)). Previous approaches have employed unsupervised techniques to cluster nearby spike patterns (Ganmor et al., 2015; Prentice et al., 2016; Gardella et al., 2017) or employed user-specified, parametric approaches (Victor & Purpura, 1997; Aronov et al., 2003). In the case of single snapshots in time used here, the latter approach (Victor-Purpura metric) has only one degree of freedom, which is a user-specified cost associated with moving spikes from one cell to another. In our proposed method, the relative importance of cell identity is learned directly from the statistics of population firing patterns.
The present work is a stepping stone towards building an encoding algorithm for retinal prostheses. In this paper, we learn the metric using light evoked responses. However, we need to estimate this metric in a blind retina, which has no light evoked responses. The convolutional metric is adaptable to any RGC population by merely noting cell types and center locations. Thus a convolutional metric could be trained on multiple healthy retinas and applied to a blind retina. Preliminary results in this direction indicate that a convolutional metric trained on half of the cells in a retinal recording (training data) generalizes to the other half (validation data), yielding performance higher than a quadratic metric (and comparable to a convolutional metric) trained directly on the validation data.
Additional techniques may also be helpful in extending our method to data involving many cells, temporal responses, and additional response structure. For example, using recurrent neural networks (Lipton et al., 2015) to embed responses may help compute distances between spiking patterns consisting of multiple time bins, perhaps of unequal length. Boosting (Freund & Schapire, 1999) may help combine multiple efficiently learned metrics for smaller, spatially localized groups of cells. Other metrics may be developed to capture invariances learned by commonly used encoding models (Chichilnisky, 2001; Pillow et al., 2008). Also, triplet mining techniques (i.e., choosing hard negatives), a commonly used trick in computer vision, may improve efficiency (Schroff et al., 2015; Oh Song et al., 2016). Novel metrics could also be learned with additional structure in population responses, such as the highly structured correlated activity in RGCs (Mastronarde, 1983; Greschner et al., 2011). This noise correlation structure may be learnable using negative examples that destroy the noise correlations in data while preserving light response properties, by taking responses of different cells from different repeats of the stimulus.
Note that the convolutional metric outperforms the quadratic metric at both global (ROC curves) and local (precision recall curves) scales. However, using current retinal prosthesis technologies, we might be able to resolve information only up to a particular scale. For current retinal prostheses,
capturing global structure may be of greatest importance, because state-of-the-art technology has a relatively coarse vocabulary for stimulating RGCs (Humayun et al., 2012; Zrenner et al., 2011) (see also Figure 1). Specifically, the “nearest” elicited firing pattern is “far” in terms of the corresponding visual stimulus (Figure 5). In terms of the proposed learned metric, the nearest feasible firing pattern achievable by electrical stimulation in our experiments is at the 10th percentile of all possible firing patterns. In this context, the average closest stimulation pattern, expressed as a percentile of the learned metric distances, provides a valuable benchmark to measure the performance of a prosthesis and how that performance is affected by advances in the underlying hardware and software.
ACKNOWLEDGEMENTS
We thank Vineet Gupta for numerous helpful discussions. We thank Pawel Hottowy, Alexander Sher, Alan M. Litke, Alexandra Tikidji-Hamburyan, Georges Goetz, Nora Brackbill, Colleen Rhoades and Lauren Grosberg for help with experiments. Research funding provided by Internship program at Google Brain (NPS), DARPA Contract FA8650-16-1-765 (EJC).
A APPENDIX
A.1 DETAILS OF THE CONVOLUTIONAL METRIC
We build a hierarchical, convolutional network to mirror the translation invariance expected in the receptive field organization of the retina. The goal of this network is to flexibly capture population activity of ON and OFF cells but employ minimal knowledge about cell receptive fields. The reason for this approach is to build a model that may be amenable to a retinal prosthetic in which the characterization of individual retinal ganglion cells is limited (Jepson et al., 2014a;b).
In particular, the network employs knowledge of the receptive field locations and firing rates of individual cells but the network is independent of the number of cells in the retina. The latter point is achieved by embedding the responses of neurons into pathways grouped by cell type. In our experiments, we focus on 2 cell types (ON and OFF parasols), thus we employ a 2 channel pathway (Kandel et al., 2000).
The network receives as input the spiking activity of ON and OFF parasols and embeds these spike patterns as one-hot vectors placed at the spatial locations of each cell’s receptive field. The resulting pattern of activations is summed across all cells in the ON and OFF populations, respectively, and passed through several convolutional layers of a network. Successive layers shrink the spatial activation size of the representation, while increasing the number of filter channels (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). The final embedding response vector has 1/16th the number of pixels in the stimulus and represents the flattened representation of the last layer of the network.
Let c denote the number of different cells. The RGC population response is a vector r ∈ {0, 1}^c.
• Represent responses as vectors over {+1, −1} via r̃ = 2(r − 0.5).
• Compute the scale for each cell as a cubic function of its mean firing rate µ_i:
s_i = a_0 µ_i^3 + a_1 µ_i^2 + a_2 µ_i + a_3.
• Map each cell to its center location on a grid with the same spatial dimensions as the visual stimulus. Let M_i be the grid embedding of cell i, so M_i is zero at all positions except the center of the cell.
• Perform a separable 5 × 5 convolution of stride 1 on each M_i to get the RF estimate of the cell, M̃_i.
• Add the activations of cells of the same type to get the total activation for a given cell type. Hence, the activation map for each cell type t is A_t = Σ_i r̃_i s_i M̃_i, with the sum taken over cells of type t. Subsequent layers receive as input a two-layered activation map corresponding to ON and OFF parasol cells.
• The convolutional layers further combine information across multiple cells of different types. The details of the different layers are shown in Figure 6 and Table 1.
A.2 ACCURACY OF THE LINEAR DECODER
For the latter analyses assessing the quality of the metric, we reconstruct the stimulus from neural responses with linear decoding. In this section we demonstrate that even though the linear decoder is rather simplistic, the reconstructions are on par with a non-parametric decoding method which averages the stimuli corresponding to the response pattern. In Figure 7A, we see that the linear decoder has very similar spatial structure to the non-parametric decoder. To quantify this, we compute the mean-squared error between the two methods of decoding, normalized by the magnitude of the non-parametric decoder (Figure 7B, blue dots). The error of linear decoding is comparable to the error between two non-parametric decodings computed using independent samples of stimuli (Figure 7B, green dots). These observations show that the linear decoder is a reasonable first-order approximation of the encoded stimulus. | 1. What is the main contribution of the paper regarding optimizing metrics in retinal prosthetics?
2. What are the strengths and weaknesses of the proposed approach compared to previous work in metric learning?
3. How does the reviewer assess the significance and impact of the paper on improving retinal prosthetics?
4. What are some minor comments and suggestions for improving the clarity and presentation of the paper? | Review | Review
* Summary of paper: The paper addresses the problem of optimizing metrics in the context of retinal prosthetics: Their goal is to learn a metric which assumes spike-patterns generated by the same stimulus to be more similar to each other than spike-patterns generated by different stimuli. They compare a conventional, quadratic metric to a neural-network based representation and a simple Hamming metric, and show that the neural-network based one achieves higher performance, but that the quadratic metric does not substantially beat the simple Hamming baseline. They subsequently evaluate the metric (unfortunately, only the quadratic metric) in two interesting applications involving electrical stimulation, with the goal of selecting stimulations which elicit spike-patterns which are maximally similar to spike-patterns evoked by particular stimuli.
* Quality: Overall, the paper is of high quality. What puzzled me, however, is the fact that, in the applications using electrical stimulation in the paper (i.e. the applications targeted at retinal prosthetics, Secs 3.3 and 3.4), the authors do not actually use the well-performing neural-network based metric, but rather the quadratic metric, which is no better than the baseline Hamming metric. It would be valuable for them to comment on what additional challenges would arise by using the neural network instead, and whether they think they could be surmounted.
* Clarity: The paper is overall clear, but specific aspects could be improved: First, it took me a while to understand (and it is not entirely clear to me) what the goal of the paper is, in particular outside the setting studied by the authors (in which there is a small number of stimuli to be distinguished). Second, while the paper does not claim to provide a new metric-learning approach, it would benefit from more clearly explaining if and how their approach relates to previous approaches to metric learning. Third, the paper, in my view, overstates some of the implications. As an example, Figure 5 is titled 'Learned quadratic response metric gives better perception than using a Hamming metric.': there is no psychophysical evaluation of perception in the paper, and even the (probably hand-picked?) examples in the figure do not look amazing.
* Originality: To the best of my knowledge, this is the first paper addressing the question of learning similarity metrics in the context of retinal prosthetics. Therefore, this specific paper and approach is certainly novel and original. From a machine-learning perspective, however, this seems like pretty standard metric learning with neural networks, and no attempt is made to either distinguish or relate their approach to prior work in this field (e.g. Chopra et al 2005, Schroff et al 2015 or Oh Song et al 2016.)
In addition, there is a host of metrics and kernels which have been proposed for measuring similarity between spike trains (Victor-Purpura) -- while they might not have been developed in the context of prosthetics, they might still be relevant to this task, and it would have been useful to see a comparison of how well they do relative to a Hamming metric. The paper states this as a goal ("This measure should expand upon..."), but then never does so. Why not?
* Significance: The general question the authors are approaching (how to improve retinal prosthetics) is an extremely important one both from a scientific and societal perspective. How important is the specific advance presented in this paper? The authors learn a metric for quantifying similarity between neural responses, and show that it performs better than a Hamming metric. It would be useful for the paper to comment on how they think that metric will be useful for retinal prosthetics. In a real prosthetic device, one will not be able to learn a metric, as the metric learning here requires access to multiple trials of visual stimulation data, neuron-by-neuron. Clearly, any progress on the way to retinal prosthetics is important and this approach might contribute to that. However, the current presentation of the manuscript gives a somewhat misleading impression of what has been achieved, and a more nuanced presentation would be important and appropriate.
Overall, this is a nice paper which could be of interest to ICLR. Its strengths are that i) they identified a novel, interesting and potentially impactful problem that has not been worked on in machine learning before, and ii) they provide a solution to it based on metric learning, and show that it performs better than non-learned metrics. Its limitations are that i) no novel machine-learning methodology is used (and the relationship to prior work in machine learning is not clearly described), ii) comparisons with previously proposed similarity measures of spike trains are lacking, iii) the authors do not actually use their learned, network-based metric, but the metric which performs no better than the baseline, in their main results, and iv) it is not well explained how this improved metric could actually be used in the context of retinal prosthetics.
Minor comments:
- p.2 The authors write that the element-wise product is denoted by $A \bullet B = \Tr(A^{\intercal} B)$
This seems to be incorrect, as the r.h.s. corresponds to a scalar.
- p.3 What exactly is meant by “mining”?
- p.4 It would be useful to give an example of what is meant by “similarity learning”.
- p.4 “Please the Appendix” -> “Please see the Appendix”
- p.5 (Fig. 3) The abbreviation “AUC” is not defined.
- p.5 (Fig. 3B) The figure giving 'recall' should have a line indicating perfect performance, for comparison.
- Sec 3.3: How was the decoder obtained ?
- p.6 (Fig. 4) Would be useful to state that column below 0 is the target. Or just replace “0” by “target”.
- p.6 (3rd paragraph) The sentence “Figure 4A bottom left shows the spatial profile of the linear decoding 20ms prior to the target response.” is unclear. It took me a very long time to realize that "bottom left" meant "column 0, 'decoded stimulus'" row. It's also unclear why the authors chose to look at 20ms prior to the target response.
- p.6 The text says RMS distance, but the Fig. 4B caption says MSE— is this correct? |
ICLR | Title
KNIFE: Kernelized-Neural Differential Entropy Estimation
Abstract
Estimation of (differential) entropy and the related mutual information has been pursued with significant efforts by the machine learning community. To address shortcomings in previously proposed estimators for differential entropy, here we introduce KNIFE, a fully parameterized, differentiable kernel-based estimator of differential entropy. The flexibility of our approach also allows us to construct KNIFE-based estimators for conditional (on either discrete or continuous variables) differential entropy, as well as mutual information. We empirically validate our method on high-dimensional synthetic data and further apply it to guide the training of neural networks for real-world tasks. Our experiments on a large variety of tasks, including visual domain adaptation, textual fair classification, and textual fine-tuning demonstrate the effectiveness of KNIFE-based estimation.
1 INTRODUCTION
Learning tasks require information (Principe et al., 2006) in the form of training data. Thus, information measures (Shannon, 1948) (e.g. entropy, conditional entropy and mutual information) have been a source of inspiration for the design of learning objectives in modern machine learning (ML) models (Linsker, 1989; Torkkola, 2006). Over the years, a plethora of estimators have been introduced to estimate the value of the aforementioned measures of information and they have been applied to many different problems, including information and coding theory, limiting distributions, model selection, design of experiments and optimal prior distribution, data disclosure, and relative importance of predictors (Ebrahimi et al., 2010). In these applications, traditional research focused on both developing new estimators and obtaining provable guarantees on the asymptotic behavior of these estimators (Liu et al., 2012; Verdú, 2019).
However, when used for training deep neural networks, additional requirements need to be satisfied. In particular, the estimator needs to be differentiable w.r.t. the data distribution (R1), computationally tractable (R2), and able to rapidly adapt to changes in the underlying distribution (R3). For instance, Mutual Information (MI), a fundamental measure of dependence between variables, only became a popular (standalone or regularizing) learning objective for DNNs once estimators satisfying the above requirements were proposed (Poole et al., 2019; Barber & Agakov, 2003). Although MI is notoriously difficult to estimate in high dimensions (Kraskov et al., 2004; Pichler et al., 2020; McAllester & Stratos, 2020), these estimators have demonstrated promising empirical results in unsupervised representation learning (Krause et al., 2010; Bridle et al., 1992; Hjelm et al., 2019; Tschannen et al., 2020), discrete/invariant representations (Hu et al., 2017; Ji et al., 2019), generative modelling (Chen et al., 2016; Zhao et al., 2017), textual disentangling (Cheng et al., 2020b; Colombo et al., 2021), and applications of the Information Bottleneck (IB) method (Mahabadi et al., 2021; Devlin et al., 2018; Alemi et al., 2016) among others. Compared to MI, Differential Entropy (DE) has received less attention from the ML community while also having interesting applications.
In this paper, we focus on the problem of DE estimation as this quantity naturally appears in many applications (e.g. reinforcement learning (Shyam et al., 2019; Hazan et al., 2019; Ahmed et al., 2019; Kim et al., 2019), IB (Alemi et al., 2016), mode collapse (Belghazi et al., 2018)). Traditional estimators of DE often violate at least one of the requirements (R1) – (R3) listed above (e.g. k-nearest neighbor based estimators violate (R1)). As a consequence, the absence of a DE estimator for arbitrary data distributions forces deep learning researchers to either restrict themselves to special cases where closed-form expressions for DE are available (Shyam et al., 2019) or use MI as a proxy
(Belghazi et al., 2018). In this work, we introduce a Kernelized Neural dIFferential Entropy (KNIFE) estimator, that satisfies the aforementioned requirements and addresses limitations of existing DE estimators (Schraudolph, 2004; McAllester & Stratos, 2020). Stemming from recent theoretical insights (McAllester & Stratos, 2020) that justify the use of DE estimators as building blocks to better estimate MI, we further apply KNIFE to MI estimation. In the context of deep neural networks with high dimensional data (e.g. image, text), KNIFE achieves competitive empirical results in applications where DE or MI is required.
1.1 CONTRIBUTIONS
Our work advances methods in DE and MI estimation in several ways.
1. We showcase limitations of the existing DE estimators proposed in Schraudolph (2004); McAllester & Stratos (2020) with respect to desirable properties required for training deep neural networks. To address these shortcomings, we introduce KNIFE, a fully learnable kernel-based estimator of DE. The flexibility of KNIFE allows us to construct KNIFE-based estimators for conditional DE, conditioning on either a discrete or continuous random variable.
2. We prove learnability under natural conditions on the underlying probability distribution. By requiring a fixed Lipschitz condition and bounded support, we are not only able to provide an asymptotic result, but also a confidence bound in the case of a finite training set. This extends the consistency result by Ahmad & Lin (1976).
3. We validate on synthetic datasets (including multi-modal, non-Gaussian distributions) that KNIFE addresses the identified limitations and outperforms existing methods on both DE and MI estimation. In particular, KNIFE more rapidly adapts to changes in the underlying data distribution.
4. We conduct extensive experiments on natural datasets (including text and images) to compare KNIFE-based MI estimators to the most recent MI estimators. First, we apply KNIFE in the IB principle to fine-tune a pretrained language model. Using KNIFE, we leverage a closed-form expression of a part of the training objective and achieve the best scores among competing MI estimators. Second, on fair textual classification, the KNIFE-based MI estimator achieves near perfect disentanglement (with respect to the private, discrete label) at virtually no degradation of accuracy in the main task. Lastly, in the challenging scenario of visual domain adaptation, where both variables are continuous, KNIFE-based MI estimation also achieves superior results.
1.2 EXISTING METHODS AND RELATED WORK
DE estimation. Existing methods for estimating DE fit into one of three categories (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Verdú, 2019): plug-in estimates (Ahmad & Lin, 1976; Györfi & Van der Meulen, 1987), estimates based on sample-spacings (Tarasenko, 1968), and estimates based on nearest neighbor distances (Kozachenko & Leonenko, 1987; Tsybakov & Van der Meulen, 1996; Berrett et al., 2019). Our proposed estimator falls into the first category, and we will thus focus here on previous work using that methodology. Excellent summaries of all the available methods can be found in (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Wang et al., 2009; Verdú, 2019). In Ahmad & Lin (1976), a first nonparametric estimator of DE was suggested and theoretically analyzed. It builds on the idea of kernel density estimation using Parzen-Rosenblatt windowing (Rosenblatt, 1956; Parzen, 1962). More detailed analyses followed (Joe, 1989; Hall & Morton, 1993), but the estimator remained essentially unchanged. Unfortunately, this classical literature is mostly concerned with appropriate regularity conditions that guarantee asymptotic properties of estimators, such as (asymptotic) unbiasedness and consistency. Machine learning applications, however, usually deal with a fixed—often very limited—number of samples.
Differentiable DE estimation. A first estimator that employed a differential learning rule was introduced in Viola et al. (1996). While the estimator proposed therein is optimized using stochastic optimization, it only uses a single kernel with a small number of parameters. An extension that uses a heteroscedastic kernel density estimate, i.e., different kernels at different positions, was proposed in Schraudolph (2004). Still, the number of parameters remained quite low, and neither varying kernel means nor variable weights were considered. Although the estimation of DE has remained a topic of major interest, as illustrated by recent works focusing on special classes of distributions (Kolchinsky & Tracey, 2017; Chaubey & Vu, 2021) and nonparametric estimators (Sricharan et al., 2013; Kandasamy et al., 2015; Moon et al., 2021), the estimator introduced in Schraudolph (2004) was not further refined and is hardly explored in recent works.
Differentiable MI estimation. In contrast, there has been a recent surge of new methods for the estimation of the closely related MI between two random variables. The most prominent examples include unnormalized energy-based variational lower bounds (Poole et al., 2019), the lower bounds developed in Nguyen et al. (2010) using the variational characterization of f-divergences, the MINE estimator developed in Belghazi et al. (2018) from the Donsker-Varadhan representation of MI, which can also be interpreted as an improvement of the plug-in estimator of Suzuki et al. (2008), the noise-contrastive bound developed in van den Oord et al. (2018), and finally a contrastive upper bound (Cheng et al., 2020a). McAllester & Stratos (2020) point out shortcomings in other estimation strategies and introduce their own Differences of Entropies (DOE) method.
2 KNIFE
In this section we identify limitations of existing entropy estimators introduced in Schraudolph (2004); McAllester & Stratos (2020). Subsequently, we present KNIFE, which addresses these shortcomings.
2.1 LIMITATIONS OF EXISTING DIFFERENTIAL ENTROPY ESTIMATORS
Consider a continuous random vector $X \sim p$ in $\mathbb{R}^d$. Our goal is to estimate the DE $h(X) := -\int p(x)\log p(x)\,dx$. Given the intractability of this integral, we will rely on a Monte-Carlo estimate of $h(X)$, using $N$ i.i.d. samples $\mathcal{D}_x = \{x_n\}_{n=1}^{N}$ to obtain

$$\hat{h}_{\text{ORACLE}}(\mathcal{D}_x) := -\frac{1}{N}\sum_{n=1}^{N}\log p(x_n). \qquad (1)$$
Unfortunately, assuming access to the true density p is often unrealistic, and we will thus construct an estimate p̂ that can then be plugged into (1) instead of p. If p̂ is smooth, the resulting plug-in estimator of DE is differentiable (R1).
Assuming access to an additional—ideally independent—set of $M$ i.i.d. samples $\mathcal{E} = \{x'_m\}_{m=1}^{M}$, we build upon the Parzen-Rosenblatt estimator (Rosenblatt, 1956; Parzen, 1962)

$$\hat{p}(x; w, \mathcal{E}) = \frac{1}{w^d M}\sum_{m=1}^{M}\kappa\left(\frac{x - x'_m}{w}\right), \qquad (2)$$
where w > 0 denotes the bandwidth and κ is a kernel density. The resulting entropy estimator when replacing p in (1) by (2) was analyzed in Ahmad & Lin (1976). In Schraudolph (2004), this approach was extended using the kernel estimator
$$\hat{p}_{\text{SCHRAU.}}(x; \mathbf{A}, \mathcal{E}) := \frac{1}{M}\sum_{m=1}^{M}\kappa_{A_m}(x - x'_m), \qquad (3)$$

where $\mathbf{A} := (A_1, \dots, A_M)$ are (distinct, diagonal) covariance matrices and $\kappa_A(x) = \mathcal{N}(x; 0, A)$ is a centered Gaussian density with covariance matrix $A$.
The DOE method of McAllester & Stratos (2020) is an MI estimator that separately estimates a DE and a conditional DE. For DE, a simple Gaussian density estimate $\hat{p}_{\text{DOE}}(x;\theta) = \kappa_A(x - \mu)$ is used, where $\theta = (A, \mu)$ are the training parameters, the diagonal covariance matrix $A$ and the mean $\mu$.
While both SCHRAU. and DOE yield differentiable plug-in estimators for DE, they each have a major disadvantage. The strategy of Schraudolph (2004) fixes the kernel mean values at $\mathcal{E}$, which implies that the method cannot adapt to a shifting input distribution (R3). On the other hand, DOE allows for rapid adaptation, but its simple structure makes it inadequate for the DE estimation of multi-modal densities. We illustrate these limitations in Section 3.1.
2.2 KNIFE ESTIMATOR
In KNIFE, the kernel density estimate is given by
$$\hat{p}_{\text{KNIFE}}(x;\theta) := \sum_{m=1}^{M} u_m \kappa_{A_m}(x - a_m), \qquad (4)$$

where $\theta := (\mathbf{A}, \mathbf{a}, \mathbf{u})$ and the additional parameters $0 \le \mathbf{u} = (u_1, u_2, \dots, u_M)$ with $\mathbf{1}\cdot\mathbf{u} = 1$ and $\mathbf{a} = (a_1, \dots, a_M)$ are introduced. Note that $\hat{p}_{\text{KNIFE}}(x;\theta)$ is a smooth function of $\theta$, and so is our proposed plug-in estimator

$$\hat{h}_{\text{KNIFE}}(\mathcal{D}_x;\theta) := -\frac{1}{N}\sum_{n=1}^{N}\log \hat{p}_{\text{KNIFE}}(x_n;\theta). \qquad (5)$$
KNIFE combines the ideas of Schraudolph (2004); McAllester & Stratos (2020). It is differentiable and able to adapt to shifting input distributions, while capable of matching multi-modal distributions. Thus, as we will see in synthetic experiments, incorporating the weights $u_m$ and shifts $a_m$ in the optimization enables the use of KNIFE in non-stationary settings, where the distribution of $X$ evolves over time.
Learning step: Stemming from the observation that, by the Law of Large Numbers (LLN),
$$\hat{h}_{\text{KNIFE}}(\mathcal{D}_x;\theta) \overset{\text{LLN}}{\approx} -\mathbb{E}\left[\log \hat{p}_{\text{KNIFE}}(X;\theta)\right] = h(X) + D_{\text{KL}}\left(p \,\|\, \hat{p}_{\text{KNIFE}}(\,\cdot\,;\theta)\right) \ge h(X), \qquad (6)$$
we propose to learn the parameters $\theta$ by minimizing $\hat{h}_{\text{KNIFE}}$, where $\mathcal{E}$ may be used to initialize $\mathbf{a}$. Although not strictly equivalent due to the Monte-Carlo approximation, minimizing $\hat{h}_{\text{KNIFE}}$ can be understood as minimizing the Kullback-Leibler (KL) divergence in (6), effectively closing the gap between $\hat{h}_{\text{KNIFE}}$ and $h(X)$. In fact, $\hat{h}_{\text{KNIFE}}$ can also be interpreted as the standard maximum likelihood objective, widely used in modern machine learning. It is worth mentioning that the KNIFE estimator is fully differentiable with respect to $\theta$ and the optimization can be tackled by any gradient-based method (e.g., Adam (Kingma & Ba, 2014) or AdamW (Loshchilov & Hutter, 2017)).
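To make the construction concrete, the following is a minimal PyTorch sketch of (4)–(5), assuming diagonal-covariance Gaussian kernels for brevity; the actual implementation uses full covariance matrices via a Cholesky parameterization (see Appendix A), and all names here are illustrative rather than taken from the released code.

```python
# Minimal KNIFE sketch: eq. (4) with diagonal Gaussian kernels, eq. (5) as loss.
import math
import torch
import torch.nn as nn

class KNIFE(nn.Module):
    def __init__(self, d, M, init_samples=None):
        super().__init__()
        init = init_samples if init_samples is not None else torch.randn(M, d)
        self.means = nn.Parameter(init.clone())            # kernel shifts a_m
        self.log_scales = nn.Parameter(torch.zeros(M, d))  # log std per kernel
        self.logits = nn.Parameter(torch.zeros(M))         # softmax -> weights u

    def log_prob(self, x):
        # log p_KNIFE(x) = logsumexp_m [ log u_m + log N(x; a_m, A_m) ]
        z = (x.unsqueeze(1) - self.means) / self.log_scales.exp()   # (N, M, d)
        log_kernel = (-0.5 * (z ** 2).sum(-1)
                      - self.log_scales.sum(-1)
                      - 0.5 * x.shape[-1] * math.log(2 * math.pi))  # (N, M)
        log_u = torch.log_softmax(self.logits, dim=0)
        return torch.logsumexp(log_u + log_kernel, dim=-1)

    def forward(self, x):
        return -self.log_prob(x).mean()   # hat{h}_KNIFE, eq. (5)

# Fitting by maximum likelihood, i.e., minimizing eq. (5) as in eq. (6):
x = torch.randn(2048, 10)                # toy data with h(X) = 5 * log(2*pi*e)
knife = KNIFE(d=10, M=128, init_samples=x[:128])
optimizer = torch.optim.Adam(knife.parameters(), lr=1e-2)
for _ in range(500):
    optimizer.zero_grad()
    loss = knife(x)    # the loss value is the current entropy estimate
    loss.backward()
    optimizer.step()
```

The loss value after convergence is the KNIFE estimate of $h(X)$; initializing the centers from $\mathcal{E}$ mirrors the initialization suggested above.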
2.3 CONVERGENCE ANALYSIS
Note that the classical Parzen-Rosenblatt estimator $\hat{h}(\mathcal{D}_x; w)$, where (2) is plugged into (1), is a special case of KNIFE. Thus, the convergence analysis provided in (Ahmad & Lin, 1976, Theorem 1) also applies and yields sufficient conditions for $\hat{h}_{\text{KNIFE}}(\mathcal{D}_x;\theta) \to h(X)$. In Appendix C, we extend this result and, assuming that the underlying distribution $p$ is compactly supported on $\mathcal{X} = [0, 1]^d$ and $L$-Lipschitz continuous, prove the following theorem.

Theorem 1. For any $\delta > 0$, there exists a function $\varepsilon(N, M, w)$ such that, with probability at least $1 - \delta$, $\left|\hat{h}(\mathcal{D}_x; w) - h(X)\right| \le \varepsilon(N, M, w)$. Additionally, $\varepsilon(N, M, w) \to 0$ as $M, N \to \infty$ and $w \to 0$, provided that

$$Nw \to 0 \quad\text{and}\quad \frac{N^2 \log N}{w^{2d} M} \to 0, \qquad (7)$$
where M and N denote the number of samples in E and Dx, respectively.
The precise assumptions for Theorem 1 and an explicit formula for $\varepsilon(N, M, w)$ are given in Theorem 2 in Appendix C. In particular, Theorem 1 provides a bound on the speed of convergence for the consistency analysis in (Ahmad & Lin, 1976, Theorem 1).
2.4 ESTIMATING CONDITIONAL DIFFERENTIAL ENTROPY AND MUTUAL INFORMATION
Similar to (McAllester & Stratos, 2020), the proposed DE estimator can be used to estimate other information measures. In particular, we can use KNIFE to construct estimators of conditional DE and MI. When estimating the conditional DE and MI for a pair of random variables $(X, Y) \sim p$, we not only use $\mathcal{D}_x = \{x_n\}_{n=1}^{N}$, but also the corresponding i.i.d. samples $\mathcal{D}_y = \{y_n\}_{n=1}^{N}$, where $(x_n, y_n)$ are drawn according to $p$.
Conditional Differential Entropy. We estimate conditional DE h(X|Y ) by considering θ to be a parameterized function Θ(y) of y. Then all relations previously established naturally generalize and
$$\hat{p}_{\text{KNIFE}}(x|y; \Theta) := \hat{p}_{\text{KNIFE}}(x; \Theta(y)), \qquad \hat{h}_{\text{KNIFE}}(\mathcal{D}_x|\mathcal{D}_y; \Theta) := -\frac{1}{N}\sum_{n=1}^{N}\log \hat{p}_{\text{KNIFE}}(x_n|y_n; \Theta). \qquad (8)$$
Naturally, minimization of (6) is now performed over the parameters of $\Theta$. If $Y$ is a continuous random variable, we use an artificial neural network $\Theta(y)$, taking $y$ as its input. On the other hand, if $Y \in \mathcal{Y}$ is a discrete random variable, we have one parameter $\theta$ for each $y \in \mathcal{Y}$, i.e., $\Theta = \{\theta_y\}_{y \in \mathcal{Y}}$ and $\hat{p}_{\text{KNIFE}}(x|y;\Theta) = \hat{p}_{\text{KNIFE}}(x;\Theta(y)) = \hat{p}_{\text{KNIFE}}(x;\theta_y)$.
Mutual Information. To estimate the MI between random variables $X$ and $Y$ (either discrete or continuous), recall that MI can be written as $I(X;Y) = h(X) - h(X|Y)$. Therefore, we use the marginal and conditional DE estimators (5) and (8) to build a KNIFE-based MI estimator

$$\hat{I}_{\text{KNIFE}}(\mathcal{D}_x, \mathcal{D}_y; \theta, \Theta) := \hat{h}_{\text{KNIFE}}(\mathcal{D}_x;\theta) - \hat{h}_{\text{KNIFE}}(\mathcal{D}_x|\mathcal{D}_y; \Theta). \qquad (9)$$
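For a discrete conditioning variable, (8)–(9) amount to fitting one KNIFE density per class; the following hedged sketch reuses the KNIFE module sketched in Section 2.2 (for continuous $Y$, the kernel parameters would instead be produced by a network $\Theta(y)$, cf. Appendix A.3).

```python
# Conditional DE (discrete Y) and the MI estimator of eq. (9), as a sketch.
import torch
import torch.nn as nn

class ConditionalKNIFE(nn.Module):
    def __init__(self, d, M, num_classes):
        super().__init__()
        # One KNIFE density per value of Y, i.e., Theta = {theta_y}.
        self.per_class = nn.ModuleList(KNIFE(d, M) for _ in range(num_classes))

    def forward(self, x, y):
        # hat{h}_KNIFE(D_x | D_y): NLL under the class-matched density;
        # y is a (N,) tensor of integer class labels.
        log_p = torch.stack([m.log_prob(x) for m in self.per_class], dim=1)
        return -log_p.gather(1, y.view(-1, 1)).mean()

def knife_mi(marginal, conditional, x, y):
    # eq. (9): I(X;Y) estimated as h(X) - h(X|Y).
    return marginal(x) - conditional(x, y)
```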
3 EXPERIMENTS USING SYNTHETIC DATA
3.1 DIFFERENTIAL ENTROPY ESTIMATION
In this section, we apply KNIFE to DE estimation, comparing it to (3), the method introduced in Schraudolph (2004), subsequently labeled "SCHRAU.". Note that we did not use the Expectation-Maximization algorithm suggested in Schraudolph (2004), but instead opted for the same optimization technique as for KNIFE to facilitate a fair comparison.
3.1.1 GAUSSIAN DISTRIBUTION
As a sanity check, we test KNIFE on multivariate normal data in moderately high dimensions, comparing it to SCHRAU. and DOE, which we trained with the exact same parameters. We performed these experiments with d = 10 and d = 64 dimensional data. KNIFE yielded the lowest bias and variance in both cases, despite DOE being perfectly adapted to matching a multivariate Gaussian distribution. Additional details can be found in Appendix A.1.
In order to use a DE estimation primitive in a machine learning system, it must be able to adapt to a changing input distribution during training (R3). As already pointed out in Section 2.1, this is a severe limitation of SCHRAU., as re-drawing the kernel support $\mathcal{E}$ can be either impractical or at the very least requires a complete re-training of the entropy estimator. Whereas in (4), the kernel support $\mathbf{a}$ is trainable and it can thus adapt to a change of the input distribution. In order to showcase this ability, we utilize the approach of Cheng et al. (2020a) and successively decrease the entropy, observing how the estimator adapts. We perform this experiment with data of dimension $d = 64$ and repeatedly multiply the covariance matrix of the training vectors by a factor of $a = \frac{1}{2}$. The resulting entropy estimation is depicted in Figure 1. It is apparent that SCHRAU. suffers from a varying bias. The bias increases with decreasing variance, as the kernel support is fixed and cannot adapt as the variance of $\mathcal{D}_x$ shrinks. DOE is perfectly adapted to a single Gaussian distribution and performs similar to KNIFE.
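For illustration, this adaptation experiment can be sketched as follows, reusing the KNIFE module sketched in Section 2.2; the step counts and learning rate here are assumptions for the sketch, not the configuration of Table 4.

```python
# Non-stationary sketch: the variance is halved each epoch (a = 1/2) and the
# same KNIFE model keeps tracking h(X_i) = h(X_0) + (d/2) * i * log(a).
import torch

d, a = 64, 0.5
knife = KNIFE(d=d, M=128)
optimizer = torch.optim.Adam(knife.parameters(), lr=1e-2)
for epoch in range(5):
    scale = (a ** epoch) ** 0.5
    for _ in range(200):
        x = scale * torch.randn(512, d)
        optimizer.zero_grad()
        knife(x).backward()
        optimizer.step()
    print(epoch, knife(scale * torch.randn(4096, d)).item())
```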
3.1.2 TRIANGLE MIXTURE
KNIFE is able to cope with distributions that have multiple modes. While (3) is also capable of matching multi-modal distributions, DOE is unable to do so, as it approximates any distribution with a multivariate Gaussian. We illustrate this by matching a mixture of randomly drawn triangle distributions. The resulting estimated PDFs as well as the ground truth when estimating the entropy of a 1-dimensional mixture of triangles with 10 components can be observed in Figure 2 (left). With increasing dimension, the difficulty of this estimation rises quickly: in $d$ dimensions, the resulting PDF of independent $c$-component triangle mixtures has $c^d$ modes. To showcase the performance of KNIFE in this challenging task, we ran 10 training runs for DE estimation of 2-component triangle mixtures in 8 dimensions. An example training run is depicted in Figure 2 (right).
3.2 MUTUAL INFORMATION ESTIMATION
Multivariate Gauss. We repeat the experiments in (Cheng et al., 2020a), stepping up the MI $I(X^d; Y^d)$ between $d$ i.i.d. copies of jointly normal random variables $(X, Y)$ by increasing their correlation coefficient, i.e., $(X, Y)$ are multivariate Gaussian with correlation coefficient $\rho_i$ in the $i$-th epoch. A training run is depicted in the top of Figure 3. As in (Cheng et al., 2020a), we also repeat the experiment, applying a cubic transformation to $Y$. The estimation of MI between $d$ i.i.d. copies of $X$ and $Y^3$ can be observed in the middle row of Figure 3. The MI is unaffected by this bijective transformation. In Appendix A.3, the bias and variance are depicted separately.
Sum of Uniformly Distributed Variables. In order to test the ability of KNIFE to adapt to distributions substantially different from the Gaussian kernel shape, we apply it in MI estimation of $I(X^d; Y^d)$ with uniformly distributed data. To this end, let $X$ and $E$ be centered, uniformly distributed random variables with $\mathbb{E}[X^2] = \mathbb{E}[E^2] = 1$, and define $Y = \rho_i X + \sqrt{1-\rho_i^2}\, E$ in the $i$-th epoch. One training run with $d = 20$ is shown in Figure 3 (bottom). Details about the source distribution as well as details of the experiments can be found in Appendix A.3.
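For reference, a batch of such pairs is easy to generate; the sketch below (illustrative PyTorch) uses $U[-\sqrt{3}, \sqrt{3}]$, which has zero mean and unit variance as required.

```python
# Draw (X, Y) with X, E ~ U[-sqrt(3), sqrt(3)] i.i.d. per coordinate and
# Y = rho * X + sqrt(1 - rho^2) * E.
import torch

def uniform_mi_batch(n, d, rho):
    lim = 3 ** 0.5
    x = torch.empty(n, d).uniform_(-lim, lim)
    e = torch.empty(n, d).uniform_(-lim, lim)
    y = rho * x + (1.0 - rho ** 2) ** 0.5 * e
    return x, y
```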
4 EXPERIMENTS ON NATURAL DATA
In this section, we benchmark our proposed KNIFE-based MI estimator on three practical applications, spanning textual and visual data. We reproduce and compare our method to the most recent MI estimators, including MINE (Belghazi et al., 2018), NWJ (Nguyen et al., 2010), InfoNCE (van den Oord et al., 2018), CLUB (Cheng et al., 2020a), and DOE (McAllester & Stratos, 2020). We do not explicitly include the SMILE estimator (Song & Ermon, 2019) in our comparison, as it has the same gradient as NWJ.
Common notation: In all following applications, we will use Φψ : X → Z to denote an encoder, where X is the raw input space (i.e., texts or images), and Z denotes a lower dimensional continuous feature space. Additionally, we will use Cψ : Z → Y to denote a shallow classifier from the latent space Z to a discrete or continuous target space Y for classification or regression, respectively. We will use ψ to denote the parameters of both models, Φψ and Cψ . CE denotes the cross entropy loss.
4.1 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
IB has recently been applied to fine-tune large-scale pretrained models (Mahabadi et al., 2021) such as BERT (Devlin et al., 2018) and aims at suppressing irrelevant features in order to reduce overfitting.
Problem statement. Given a textual input X ∈ X and a target label Y ∈ Y , the goal is to learn the encoder Φψ and classifier Cψ, such that Φψ(X) retains little information about X , while still producing discriminative features, allowing the prediction of Y . Thus, the loss of interest is:
$$\mathcal{L} = \lambda \cdot \underbrace{I(\Phi_\psi(X); X)}_{\text{compression term}} - \underbrace{I(\Phi_\psi(X); Y)}_{\text{downstream term}}, \qquad (10)$$
where λ controls the trade-off between the downstream and the compression terms.
Setup. Following Mahabadi et al. (2021) (relying on VUB), we work with the VIBERT model, which uses a Gaussian distribution as prior. $\Phi_\psi$ is implemented as a stochastic encoder $\Phi_\psi(X) = Z \sim \mathcal{N}(\mu_\psi(X), \Sigma_\psi(X))$. Details on the architecture of $\mu_\psi$ and $\Sigma_\psi$ can be found in Appendix B. The classifier $C_\psi$ is composed of dense layers. To minimize $\mathcal{L}$, the second part of the objective (10) is bounded using the variational bound from Barber & Agakov (2003). Since we use a Gaussian prior, $h(Z|X)$ can be expressed in closed form.¹ Thus, when using KNIFE, $I(X;Z) = h(Z) - h(Z|X)$ can be estimated by using $\hat{h}_{\text{KNIFE}}$ to estimate $h(Z)$. We compare this KNIFE-based MI estimator with the aforementioned MI estimators and the variational upper bound (VUB). For completeness, we also compare against a BERT model trained by direct minimization of a CE loss.
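Concretely, the KNIFE-based compression term can be computed as in the sketch below, assuming a diagonal-covariance encoder output (`mu`, `log_var`) for brevity; `knife` is a model of the form sketched in Section 2.2, fit on samples of $Z$ in parallel to the main training.

```python
# I(X;Z) = h(Z) - h(Z|X): h(Z) via KNIFE, h(Z|X) in closed form (footnote 1).
import math
import torch

def knife_ib_mi(knife, mu, log_var):
    d = mu.shape[-1]
    z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)  # reparameterized Z
    h_z = -knife.log_prob(z).mean()                        # KNIFE estimate of h(Z)
    h_z_given_x = (0.5 * log_var.sum(-1).mean()
                   + 0.5 * d * math.log(2 * math.pi * math.e))
    return h_z - h_z_given_x
```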
We closely follow the protocol of (Mahabadi et al., 2021) and work on the GLUE benchmark (Wang et al., 2018) originally composed of 5 datasets. However, following (Mahabadi et al., 2021), we choose to finetune neither on WNLI (Morgenstern & Ortiz, 2015) nor on CoLA (Warstadt et al., 2019) due to reported flaws in these datasets. The evaluation is carried out on the standard validation splits as the test splits are not available. Following standard practice (Liu et al., 2019; Yang et al., 2019), we report the accuracy and the F1 for MRPC, the accuracy for RTE and the Pearson and Spearman correlation coefficient for STS-B.
Results. Table 1 reports our results on the GLUE benchmark. We observe that KNIFE obtains the best results on all three datasets and the lowest variance on MRPC and STS-B. The use of a Gaussian prior in the stochastic encoder Φψ could explain the observed improvement of KNIFE-based estimation over MI-estimators such as CLUB, InfoNCE, MINE, DOE, or NWJ.
4.2 FAIR TEXTUAL CLASSIFICATION
In fair classification, we would like the model to take its decision without utilizing private information such as gender, age, or race. For this task, MI can be minimized to disentangle the output of the encoder Z and a private label S ∈ S (e.g., gender, age, or race).
¹$h(Z|X) = \frac{1}{2}\ln|\Sigma_\psi(X)| + \frac{d}{2}\ln(2\pi e)$, where $d$ is the dimension of $Z$ and $|\cdot|$ denotes the determinant.
The training objective augments the downstream CE loss with an MI-based regularizer:

$$\mathcal{L} = \underbrace{\mathrm{CE}(Y; C_\psi(\Phi_\psi(X)))}_{\text{downstream task}} + \lambda \cdot \underbrace{I(\Phi_\psi(X); S)}_{\text{disentangled}}, \qquad (11)$$
where λ controls the trade-off between minimizing MI and CE loss. In this framework, a classifier is said to be fair or to achieve perfect privacy if no statistical information about S can be extracted from Φψ(X) by an adversarial classifier. Overall, a good model should achieve high accuracy on the main task (i.e., prediction of Y ) while removing information about the protected attribute S. This information is measured by training an offline classifier to recover the protected attribute S from Φψ(X).
Setup. We compute the second term of (11) with competing MI estimators, as well as the model from Elazar & Goldberg (2018), which will be referred to as “Adv”, as it utilizes an adversary to recover the private label from the latent representation Z. For KNIFE-based MI estimation, we use two DE estimators (as S is a binary label), following the approach outlined in Section 2.4. All derivations are detailed in Appendix B. We follow the experimental setting from Elazar & Goldberg (2018); Barrett et al. (2019) and use two datasets from the DIAL corpus (Blodgett et al., 2016) (over 50 million tweets) where the protected attribute S is the race and the main labels are sentiment or mention labels. The mention label indicates whether a tweet is conversational or not. We follow the official split using 160 000 tweets for training and two additional sets composed of 10 000 tweets each for development and testing. In all cases, the labels S and Y are binary and balanced, thus a random guess corresponds to 50% accuracy.
Results. Figure 4 gathers results on the fair classification task. The upper dashed lines represent the (private and main) task accuracies when training a model with only the CE loss (case λ = 0 in (11)). This shows that the learned encoding Φψ(X) contains information about the protected attribute, when training is only performed for the main task. On both the sentiment and mention task, we observe that a KNIFE-based estimator can achieve perfect privacy (see Figures 4b and 4d) with nearly no accuracy loss in the main task (see Figures 4a and 4c). The other MI estimators exhibit different behavior. For sentiment labels, most MI estimators fail to reach perfect privacy (CLUB, NWJ, DOE, and Adv) while others (InfoNCE) achieve perfect privacy while degrading the main task accuracy (10% loss on main accuracy). For mention labels, CLUB can also reach perfect privacy with almost no degradation of the accuracy of the main task. Overall, it is worth noting that KNIFE-based MI estimation enables better control of the degree of disentanglement than the reported baselines.
4.3 UNSUPERVISED DOMAIN ADAPTATION
In unsupervised domain adaptation, the goal is to transfer knowledge from the source domain (S) with a potentially large number of labeled examples to a target domain (T ), where only unlabeled examples are available.
Problem Statement. The learner is given access to labeled images from a source domain $(x_s, y) \sim (X_S, Y) \in \mathcal{X}_S \times \mathcal{Y}$ and unlabeled images from a target domain $x_t \sim X_T \in \mathcal{X}_T$. The goal is to learn a classification model $\{\Phi_\psi, C_\psi\}$ that generalizes well to the target domain. Training models on the supervised source data only results in domain-specific latent representations $\Phi_\psi(X)$, leading to poor generalization (when $X$ is chosen randomly from $\{X_S, X_T\}$). In order to make the latent representations as domain-agnostic as possible, we follow the information-theoretic method proposed by Gholami et al. (2020), and used in Cheng et al. (2020a). The idea is to learn an additional binary model $\{\Phi^d_\nu, C^d_\nu\}$, whose goal is to guess the domain $D \in \{0, 1\}$ of $X$. The latent representation learned by $\Phi^d_\nu$ will therefore contain all the domain-specific information that we would like the main encoder $\Phi_\psi$ to discard. In other words, we would like $\Phi_\psi(X)$ and $\Phi^d_\nu(X)$ to be completely disentangled, which naturally corresponds to the minimization of $I(\Phi_\psi(X); \Phi^d_\nu(X))$. Concretely, the domain classifier is trained to minimize the CE between domain labels $D$ and its own predictions, whereas the main classifier is trained to properly classify source samples while minimizing the MI between $\Phi_\psi(X)$ and $\Phi^d_\nu(X)$. Using $f^d_\nu := C^d_\nu \circ \Phi^d_\nu$ and $f_\psi := C_\psi \circ \Phi_\psi$, the objectives are

$$\min_\nu \mathrm{CE}(D; f^d_\nu(X)) \quad\text{and}\quad \min_\psi \mathrm{CE}(Y; f_\psi(X_S)) + \lambda \cdot I(\Phi_\psi(X); \Phi^d_\nu(X)). \qquad (12)$$
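A hedged sketch of one alternating training step for (12) is given below. Here `mi_estimator` stands for the KNIFE-based estimate of $I(\Phi_\psi(X); \Phi^d_\nu(X))$, which is itself refit by maximum likelihood between such steps; detaching the domain features so that the MI gradient flows only into $\psi$ is a design choice of this sketch, not necessarily of the original code.

```python
# One training step for eq. (12): update the domain model on CE(D; f_nu^d(X)),
# then the main model on CE(Y; f_psi(X_S)) + lam * I(Phi(X); Phi_d(X)).
import torch

def da_step(x_s, y_s, x_t, d_lbl, enc, clf, enc_d, clf_d, mi_estimator,
            opt_main, opt_dom, ce, lam=0.1):
    x = torch.cat([x_s, x_t])
    opt_dom.zero_grad()
    ce(clf_d(enc_d(x)), d_lbl).backward()          # min_nu CE(D; f_nu^d(X))
    opt_dom.step()
    opt_main.zero_grad()
    loss = (ce(clf(enc(x_s)), y_s)
            + lam * mi_estimator(enc(x), enc_d(x).detach()))
    loss.backward()
    opt_main.step()
```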
Setup. The different MI estimators are compared based on their ability to guide training by estimating I(Φψ(X); Φdν(X)) in (12). We follow the setup of Cheng et al. (2020a) as closely as possible, and consider a total of 6 source/target scenarios formed with MNIST (LeCun & Cortes, 2010), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), and STL-10 (Coates et al., 2011) datasets. We reproduce all methods and allocate the same budget for hyper-parameter tuning to every method. The exhaustive list of hyper-parameters can be found in Appendix B.
Results. Results are presented in Table 2. The KNIFE-based estimator is able to outperform MI estimators in this challenging scenario where both Φψ(X) and Φdν(X) are continuous.
5 CONCLUDING REMARKS
We introduced KNIFE, a fully learnable, differentiable kernel-based estimator of differential entropy, designed for deep learning applications. We constructed a mutual information estimator based on KNIFE and showcased several applications. KNIFE is a general purpose estimator and does not require any special properties of the learning problem. It can thus be incorporated as part of any training objective, where differential entropy or mutual information estimation is desired. In the case of mutual information, one random variable may even be discrete.
Despite the fundamental challenges in the problem of differential entropy estimation, beyond limitations arising from the use of a finite number of samples, KNIFE has demonstrated promising empirical results in various representation learning tasks.
Future work will focus on improving the confidence bounds given in Theorem 1. In particular, we plan to tailor them to KNIFE using tools from (Birge & Massart, 1995; Singh & Poczos, 2014). Another potential extension is direct estimation of the gradient of entropy once $\hat{p}_{\text{KNIFE}}(x;\theta)$ has been learned (Mohamed et al., 2020; Song et al., 2020). This could be applied after the learning phase of KNIFE and is left for future work.
APPENDIX
A EXPERIMENTAL DETAILS OF EXPERIMENTS WITH SYNTHETIC DATA
Implementation of KNIFE in PyTorch (Paszke et al., 2019) is rather straightforward. The constraint on the weights $\mathbf{u}$ can be satisfied by applying a softmax transformation. The covariance matrices were parameterized by the lower-triangular factor in the Cholesky decomposition of the precision matrices, guaranteeing that the positive-definiteness constraint is satisfied.
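One way to realize these constraints is sketched below (an illustrative fragment, not the released code): softmax logits yield the weights $\mathbf{u}$, and a lower-triangular factor with a softplus diagonal parameterizes each precision matrix $A_m^{-1}$.

```python
# Constraint handling: u on the simplex via softmax; precision matrices
# A_m^{-1} = L_m L_m^T via Cholesky factors with positive diagonals.
import torch
import torch.nn.functional as F

M, d = 128, 10
logits = torch.zeros(M, requires_grad=True)
tril_raw = torch.zeros(M, d, d, requires_grad=True)

u = F.softmax(logits, dim=0)                 # u_m >= 0 and sum_m u_m = 1
L = (torch.tril(tril_raw, diagonal=-1)
     + torch.diag_embed(F.softplus(tril_raw.diagonal(dim1=-2, dim2=-1))))
# With these factors, the log-density needs no matrix inversion:
# log det(A_m)^{-1/2} = sum(log diag(L_m)) and the quadratic form is
# ||L_m^T (x - a_m)||^2.
```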
A.1 DIFFERENTIAL ENTROPY ESTIMATION OF GAUSSIAN DATA
In Section 3.1.1, the estimation of the entropy $h(X) = \frac{d}{2}\log 2\pi e$ for $X \sim \mathcal{N}(0, I_d)$ was performed with the hyperparameters given in Table 3. The mean error and its empirical standard deviation are reported in Table 5 over 20 runs, where an independently drawn evaluation set with the same size as the training set is used. At $d = 10$ we have the entropy $h = \frac{d}{2}\log 2\pi e = 14.19$, while for the higher dimension, $d = 64$, we find $h = 90.81$.
In the experiment depicted in Figure 1, entropy is decreased after every epoch by letting $X_i \sim \mathcal{N}(0, a^i I_d)$, where $i = 0, \dots, 4$ is the epoch index. That is, $X_i = \sqrt{a^i}\, G^d$, where $G$ is a standard normal random variable, resulting in a decrease of the DE by $\Delta = -\frac{d}{2}\log a \approx 22.18$ for $a = \frac{1}{2}$ with every epoch. We start at $h(X_0) = \frac{d}{2}\log 2\pi e \approx 90.81$ and successively decrease until $h(X_4) = h(X_0) - 4\Delta \approx 2.1$. Additional parameters can be found in Table 4.
Computational Resources. Training was performed on an NVidia V100 GPU. Taken together, training for the first experiments of entropy estimation in dimensions d = 10, 64, as well as the experiment depicted in Figure 1 used GPU time of less than 5 minutes.
A.2 DIFFERENTIAL ENTROPY ESTIMATION OF TRIANGLE MIXTURES
In Section 3.1.2, we perform an estimation of the entropy of $c$-component triangle mixture distributions. The PDF of such a $c$-component triangle mixture is given by

$$p(x) = \sum_{i=1}^{c} w_i \Lambda_{s_i}\!\left(x - i - \tfrac{1}{2}\right), \qquad (13)$$

where $\Lambda_s(x) := \frac{1}{s}\max\{0,\, 2 - \frac{4}{s}|x|\}$ is a centered triangle PDF with width $s > 0$. The scales $\mathbf{s} = (s_1, \dots, s_c)$ and weights $\mathbf{w} = (w_1, \dots, w_c)$ satisfy $0 < s_i, w_i < 1$ and $\sum_{i=1}^{c} w_i = 1$. Before the experiment, we choose $\mathbf{w}$ uniformly at random from the $c$-probability simplex and the scales are chosen uniformly at random in $[0.1, 1.0]$. An example for $c = 10$ is the true PDF depicted in Figure 2 (left). For $d > 1$, we perform the estimation on $d$ i.i.d. copies. Note that the triangle mixture with $c$ components in $d$-dimensional space has $c^d$ modes, i.e., the support can be partitioned into $c^d$ disjoint components.
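Because the component supports are disjoint, the ground-truth entropy of (13) has a closed form, which makes this benchmark convenient. A sketch is below; component indexing is 0-based there, which only shifts the centers and leaves the entropy unchanged.

```python
# Sampling from eq. (13) and its exact DE: with disjoint supports,
# h = H(w) + sum_i w_i * h(Lambda_{s_i}), where h(Lambda_s) = 1/2 + ln(s/2).
import numpy as np

def sample_triangle_mixture(n, weights, scales, d=1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    comps = rng.choice(len(weights), size=(n, d), p=weights)
    centers = comps + 0.5
    s = np.asarray(scales)[comps]
    return rng.triangular(centers - s / 2, centers, centers + s / 2)

def triangle_mixture_entropy(weights, scales, d=1):
    w, s = np.asarray(weights), np.asarray(scales)
    per_dim = -np.sum(w * np.log(w)) + np.sum(w * (0.5 + np.log(s / 2)))
    return d * per_dim   # i.i.d. coordinates: entropies add
```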
The parameters of the experiment yielding Figure 2 (left) are given in Table 6, while the details of the experiment depicted in Figure 2 (right) can be found in Table 7. In the latter experiment, over ten runs, entropy was estimated to an accuracy of $1.6563 \pm 0.8528$ by KNIFE, $2.4445 \pm 0.5439$ using (3), and $7.1070 \pm 2.7984$ by DOE. This is the mean absolute error and its empirical standard deviation over all 10 runs, where the evaluation set was drawn independently from the training set and has the same size as the training set.
Computational Resources. Training was performed on an NVidia V100 GPU. Training in d = 1 dimension, that resulted in Figure 2 (left) can be performed in seconds, while all training required for producing Figure 2 (right) used approximately 1.5 hours of GPU time.
A.3 MUTUAL INFORMATION ESTIMATION
In Section 3.2, we estimate $I(X^d; Y^d)$ and $I(X^d; (Y^3)^d)$, where $(X, Y)$ are multivariate correlated Gaussian distributions with correlation coefficient $\rho_i$ in the $i$-th epoch. Subsequently, we estimate $I(X^d; Y^d)$ where $X, E \sim \mathcal{U}[-\sqrt{3}, \sqrt{3}]$ are independent and $Y$ is given by $Y = \rho_i X + \sqrt{1-\rho_i^2}\, E$. In both cases, $\rho_i$ is chosen such that $I(X^d; Y^d) = 2i$ in the $i$-th epoch.
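For the Gaussian benchmark, $I(X^d; Y^d) = -\frac{d}{2}\log(1 - \rho^2)$ (in nats), so the schedule $I = 2i$ inverts in closed form as sketched below; for the uniform construction no such closed form exists and $\rho_i$ must be found numerically.

```python
# rho_i such that -(d/2) * log(1 - rho^2) = 2 * i  (Gaussian case, nats).
import math

def rho_for_target_mi(i, d):
    return math.sqrt(1.0 - math.exp(-4.0 * i / d))
```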
All neural networks are randomly initialized. The bias, variance, and MSE during training as a function of the MI, can be observed in Figure 5.
The estimation is performed in 10 runs, randomly choosing the training meta-parameters as proposed by McAllester & Stratos (2020). In Figure 3 (bottom), we present the best run for each method, selected by distance from the true MI at the end of training. The bias, variance, and MSE during training, as a function of the MI, can be observed in Figure 6. Details about the source distribution as well as details of the experiments can be found in Table 8. During experimentation it turned out to be
beneficial to train the parameters $\Theta$ and $\theta$ in (9) separately and substantially increase the learning rate for the training of $\theta$. Thus, we increase the learning rate for the training of $\theta$ by a factor of $10^3$.
Model Architecture for $\Theta$. We utilize the feed-forward architecture also used in McAllester & Stratos (2020). It is a simple architecture with two linear layers: a hidden layer using tanh activation, immediately followed by an output layer. The number of neurons in the hidden layer is a meta-parameter selected randomly from {64, 128, 256} for each training run. Three models with this architecture are used for the three parameter groups $(\mathbf{A}, \mathbf{a}, \mathbf{u})$, as described by (4), where only the output dimension is changed to fit the parameter dimension.
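A sketch of such a parameter network $\Theta(y)$, with one head per parameter group of (4), is given below; the diagonal-scale simplification and the default hidden width are assumptions of the sketch.

```python
# Theta(y): two linear layers with a tanh hidden activation, one head each
# for the kernel centers a(y), log-scales, and weight logits of eq. (4).
import torch.nn as nn

def theta_head(dim_y, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(dim_y, hidden), nn.Tanh(),
                         nn.Linear(hidden, out_dim))

M, d, dim_y = 128, 10, 10
centers_head = theta_head(dim_y, M * d)   # -> a(y), reshaped to (M, d)
scales_head = theta_head(dim_y, M * d)    # -> log diagonal scales
weights_head = theta_head(dim_y, M)       # -> logits mapped to u(y) by softmax
```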
Computational Resources. Training was performed, using about 6 hours of GPU time on an NVidia V100 GPU to carry out the experiment depicted in Figure 3 (bottom).
B EXPERIMENTAL DETAILS OF EXPERIMENTS ON NATURAL DATA
B.1 ON THE PARAMETER UPDATE
In Section 4, we rely on two different types of models: pretrained (e.g., fine tuning with VIBERT) and randomly initialized (e.g., in fair classification and domain adaptation). When working with randomly initialized networks the parameters are updated. However, it is worth noting that in the literature the pretrained model parameters (i.e. ψ) are not always updated (see Ravfogel et al. (2020)). In our experiments: (i) We always update the parameters (even for pretrained models), and (ii) we did not change the way the parameters were updated in concurrent works (to ensure fair comparison). Specifically,
• for language model finetuning (Appendix B.2), we followed Mahabadi et al. (2021) and did a joint update;
• for the fair classification task (Appendix B.3), we followed common practice and used the algorithm described in Algorithm 1, which relies on an alternating update;
• for the domain adaptation task (Appendix B.4), we followed common practice and used a joint update.
B.2 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
For this experiment we follow the experimental setting introduced in Mahabadi et al. (2021) and work with the GLUE data.²
Model Architecture. We report in Table 9, the multilayer perceptron (MLP) used to compute the compressed sentence representations produced by BERT. Variance and Mean MLP networks are composed of fully connected layers.
²see https://gluebenchmark.com/faq
Algorithm 1: Disentanglement using an MI-based regularizer
1: INPUT: labelled training set $\mathcal{D} = \{(x_j, s_j, y_j)\ \forall j \in [n+1, N]\}$; independent set of samples $\mathcal{E}$; KNIFE parameters $\theta$; network parameters $\psi$
2: INITIALIZE parameters $\theta$, $\psi$
3: OPTIMIZATION:
4: while $(\theta, \psi)$ not converged do
5:   for $i \in [1, \text{Unroll}]$ do   ▷ learning step for KNIFE
6:     Sample a batch $B$ from $\mathcal{E}$
7:     Update $\theta$ using (9)
8:   end for
9:   Sample a batch $B'$ from $\mathcal{D}$
10:  Update $\psi$ with $B'$ using (11)
11: end while
12: OUTPUT: encoder and classifier weights $\psi$
Table 10: Experimental details on Information Bottleneck.

Parameter      Value
Learning Rate  See Appendix B.2
Optimizer      AdamW
Warmup Steps   0.0
Dropout        0.0
Batch Size     32
Model Training. For model training, all models are trained for 6 epochs and we use early stopping (the best model is selected on validation set error). For IB, $\lambda$ is selected in $\{10^{-4}, 10^{-5}, 10^{-6}\}$ and $K$ is selected in $\{144, 192, 288, 384\}$. We follow (Alemi et al., 2016), where the posterior is averaged over 5 samples and a linear annealing schedule is used for $\lambda$. Additional hyper-parameters are reported in Table 10.
Dataset Statistics. Table 11 reports the statistics of the dataset used in our finetuning experiment.
Computational Resources. For all these experiments we rely on NVidia-P100 with 16GB of RAM. To complete the full grid-search on 10 seeds and on the three datasets, approximately 1.5k hours are required.
B.3 FAIR TEXTUAL CLASSIFICATION
In this section, we gather the experimental details for the textual fair classification task.
B.3.1 DETAILS OF THE KNIFE-BASED ESTIMATOR
In this experiment, we estimate the MI between a continuous random variable, namely $Z = \Phi_\psi(X)$, and a discrete variable, denoted by $S \in \mathcal{S} = \{1, 2, \dots, |\mathcal{S}|\}$. We follow the strategy outlined in Section 2.4 for estimating the conditional DE $h(Z|S)$. However, we will reuse the estimate of the conditional PDF $\hat{p}(z|s; \Theta)$ to compute an estimate of the DE as

$$h(Z) \approx -\frac{1}{N}\sum_{n=1}^{N}\log\left(\sum_{s \in \mathcal{S}} \hat{p}_{\text{KNIFE}}(z_n|s; \Theta)\,\hat{p}(s)\right), \qquad (14)$$

where $\hat{p}(s) = \frac{1}{N}|\{n : s_n = s\}|$ denotes the empirical distribution of $S$ in the training set $\mathcal{D}_s$.³ In our experiments, with $|\mathcal{S}| = 2$, we found that estimating the DE $h(Z)$ based on the KNIFE estimator learnt for $h(Z|S)$ increases the stability of training. We adopted the same strategy for DOE.
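In code, (14) is a log-sum-exp over the class-conditional KNIFE densities; the sketch below reuses the ConditionalKNIFE module sketched in Section 2.4 and assumes `p_s` holds the empirical label frequencies.

```python
# eq. (14): marginal h(Z) from the class-conditional densities and the
# empirical label distribution p_s (uniform under balanced batches).
import torch

def h_marginal(cond_knife, z, p_s):
    log_p = torch.stack([m.log_prob(z) for m in cond_knife.per_class], dim=1)
    log_mix = torch.logsumexp(log_p + p_s.log(), dim=1)  # log sum_s p(z|s) p(s)
    return -log_mix.mean()
```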
B.3.2 EXPERIMENTAL DETAILS
Model Architecture. For the encoder, we use a bidirectionnal GRU with two layers with hidden and input dimension set to 128. We use LeakyReLU as the activation function. The classification head is composed of fully connected layers of input dimension 256. We use a learning rate of 0.0001 for AdamW. The dropout rate is set to 0.2. The number of warmup steps is set to 1000.
³As we work with balanced batches, we will have $\hat{p}(s) = \frac{1}{|\mathcal{S}|}$.
Computational Resources. For all these experiments, we rely on NVIDIA-P100 with 16GB of RAM. Each model is trained for 30k steps. The model with the lowest MI is selected. The training of a single network takes around 3 hours.
B.4 UNSUPERVISED DOMAIN ADAPTATION
We follow the experimental setup given in Cheng et al. (2020a) as closely as possible, i.e., we pick hyperparameters given in the paper, or, if not provided, those set in the code.⁴
Model Training. We use Adam optimizer for all modules with a learning rate of 0.001. Batch size is set to 128. We set the weighting parameter λ = 0.1. The original code of Cheng et al. (2020a) uses 15 000 training iterations, but we found most methods had not properly converged at this stage, and hence use 25 000 iterations instead. Similar to other experiments, we set the kernel size M = 128.
Model Architecture. Table 12 summarizes the architectures used for the different modules. For the MI network of each method, the best configuration, based on the validation set of the first task MNIST→MNIST-M, is chosen among 4 configurations: with or without LayerNorm and with ReLU or tanh activation.
Computational Resources. For these experiments, we used a cluster of NVIDIA-V100 with 16GB of RAM. Each training (i.e., 25k iterations) on a single task requires on average 2 hours. Given that we have 6 tasks, and repeat the training for 3 different seeds, on average 36 hours computation time is required for each method.
C BOUNDING THE ERROR
In the following, fix $L > 0$ and let $\mathcal{P}_L$ be the set of $L$-Lipschitz PDFs supported⁵ on $\mathcal{X} := [0, 1]^d$, i.e., $\int_{\mathcal{X}} p(x)\,dx = 1$, and

$$\forall x, y \in \mathbb{R}^d : |p(x) - p(y)| \le L\|x - y\| \qquad (15)$$

for $p \in \mathcal{P}_L$, where⁶ $\|x\| := \sum_k |x_k|$.
Assume $p \in \mathcal{P}_L$ and let $\kappa$ be a PDF supported on $\mathcal{X}$. In order to show that estimation of $h(X)$ is achievable, we use a standard Parzen-Rosenblatt estimator $\hat{p}(x; w) := \frac{1}{Mw^d}\sum_{m=1}^{M}\kappa\left(\frac{x - X'_m}{w}\right)$, as in (2). The entropy estimate is then defined by the empirical average

$$\hat{h}(\mathcal{D}_x; w) := -\frac{1}{N}\sum_{n=1}^{N}\log \hat{p}(X_n; w). \qquad (16)$$
Further, define the following quantities, which are assumed to be finite:

$$p_{\max} := \max\{p(x) : x \in \mathcal{X}\}, \qquad (17)$$
$$C_1 := \int p(x)\log^2 p(x)\,dx, \qquad (18)$$
$$C_2 := L\int \|u\|\,\kappa(u)\,du, \qquad (19)$$
$$K_{\max} := \max\{\kappa(x) : x \in \mathcal{X}\}. \qquad (20)$$

Note that $p_{\max}$ is finite and $C_1 \le \max\{p_{\max}\log^2 p_{\max},\, 4e^{-2}\}$ by our assumptions. The requirement $C_2, K_{\max} < \infty$ represents a mild condition on the kernel function $\kappa$. We can now show the following.
⁴https://github.com/Linear95/CLUB/tree/master/MI_DA.
⁵Any known compact support suffices. An affine transformation then yields $\mathcal{X} = [0, 1]^d$, while possibly resulting in a different Lipschitz constant.
⁶The $\ell_1$ norm is chosen to facilitate subsequent computations. By the equivalence of norms on $\mathbb{R}^d$, any norm suffices.
Theorem 2. With probability greater than $1 - \delta$ we have

$$|h(X) - \hat{h}(\mathcal{D}_x; w)| \le -\log\left(1 - \frac{3NK_{\max}}{w^d\delta}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} - \frac{3NC_2 w}{\delta}\right) + \sqrt{\frac{3C_1}{N\delta}}, \qquad (21)$$

if the expression in the logarithm is positive.

In particular, the estimation error approaches zero as $N \to \infty$ if $w = w(N) \to 0$ and $M = M(N) \to \infty$ are chosen such that

$$Nw \to 0, \qquad (22)$$
$$\frac{N^2\log N}{w^{2d}M} \to 0. \qquad (23)$$
We prove Theorem 2 in several lemmas.
Lemma 3. Fix $\delta > 0$ and $x_0 \in \mathcal{X}$. Then, with probability greater than $1 - \delta$,

$$|p(x_0) - \hat{p}(x_0)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}} + C_2 w. \qquad (24)$$
Proof. First, we can show that

$$|\mathbb{E}[\hat{p}(x_0)] - p(x_0)| = \left|\frac{1}{Mw^d}\sum_{m=1}^{M}\int \kappa\left(\frac{x_0 - x}{w}\right)p(x)\,dx - p(x_0)\right| \qquad (25)$$
$$= \left|\frac{1}{w^d}\int \kappa\left(\frac{x_0 - x}{w}\right)p(x)\,dx - p(x_0)\right| \qquad (26)$$
$$= \left|\int \kappa(u)\,p(x_0 - wu)\,du - p(x_0)\right| \qquad (27)$$
$$= \left|\int \kappa(u)\,[p(x_0 - wu) - p(x_0)]\,du\right| \qquad (28)$$
$$\le \int \kappa(u)\,|p(x_0 - wu) - p(x_0)|\,du \qquad (29)$$
$$\le \int \kappa(u)\,Lw\|u\|\,du \qquad (30)$$
$$= wC_2. \qquad (31)$$

Next, note that

$$|\mathbb{E}[\hat{p}(x_0)] - \hat{p}(x_0)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}} \qquad (32)$$

holds with probability greater than $1 - \delta$, as the requirements of McDiarmid's inequality (Paninski, 2003, Sec. 3) are satisfied with $c_j = \frac{K_{\max}}{Mw^d}$ and thus $\mathbb{P}\{|\mathbb{E}[\hat{p}(x_0)] - \hat{p}(x_0)| \ge \varepsilon\} \le \delta$ with

$$\varepsilon = \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}}. \qquad (33)$$

Combining (31) and (32) gives (24).
Lemma 4. For any continuous random variable $X$ supported on $\mathcal{X}$ and $a \ge 0$, we have

$$\mathbb{P}\{p(X) \le a\} \le a. \qquad (34)$$

Proof. We apply Markov's inequality to the random variable $Y = \frac{1}{p(X)}$ and observe that

$$\mathbb{P}\{p(X) \le a\} = \mathbb{P}\{Y \ge a^{-1}\} \le a\,\mathbb{E}[Y] = \mathrm{vol}(\mathcal{X})\,a = a. \qquad (35)$$
Lemma 5. If $x > 0$, $y \ge a > 0$, $0 < a < 1$, and $|x - y| \le \delta < a$, then

$$|\log x - \log y| \le \log\frac{a}{a - \delta} = -\log\left(1 - \frac{\delta}{a}\right). \qquad (36)$$
Proof. Case $x \ge y$. We can write $y = a + b$ and $x = y + c = a + b + c$ for $b \ge 0$ and $0 \le c \le \delta < a$. Then

$$\left|\log\frac{x}{y}\right| = \log\left(1 + \frac{c}{a + b}\right) \qquad (37)$$
$$\le \log\left(1 + \frac{c}{a}\right) \le \log\left(1 + \frac{\delta}{a}\right). \qquad (38)$$

Furthermore,

$$\log\left(\frac{a}{a - \delta}\right) - \log\left(1 + \frac{\delta}{a}\right) = \log\frac{a^2}{(a + \delta)(a - \delta)} \qquad (39)$$
$$= \log\frac{a^2}{a^2 - \delta^2} \qquad (40)$$
$$\ge \log 1 = 0, \qquad (41)$$

so that $\log\left(1 + \frac{\delta}{a}\right) \le \log\frac{a}{a - \delta}$.

Case $x < y$. Here, we can write $y = a + b$ and $x = y - c = a + b - c$ for $b \ge 0$ and $0 \le c \le \delta < a$. Then

$$\left|\log\frac{x}{y}\right| = \log\frac{y}{x} \qquad (42)$$
$$= \log\left(\frac{a + b}{a + b - c}\right) \qquad (43)$$
$$\le \log\left(\frac{a}{a - c}\right) \qquad (44)$$
$$\le \log\left(\frac{a}{a - \delta}\right) = -\log\left(1 - \frac{\delta}{a}\right). \qquad (45)$$
Proof of Theorem 2. We apply Lemma 3 $N$ times and use the union bound to show that, with probability greater than $1 - \frac{\delta}{3}$, we have for every $n \in [N]$

$$|p(X_n) - \hat{p}(X_n)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} + C_2 w. \qquad (46)$$

Similarly, by Lemma 4, we have with probability greater than $1 - \frac{\delta}{3}$ that

$$p(X_n) \ge \frac{\delta}{3N} \qquad (47)$$

for all $n \in [N]$.

Again by the union bound, we have that with probability greater than $1 - \frac{2\delta}{3}$ both (46) and (47) hold for all $n \in [N]$, and thus, by Lemma 5, we obtain

$$\left|\hat{h}(\mathcal{D}_x; w) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right| = \left|\frac{1}{N}\sum_{n=1}^{N}\log\frac{p(X_n)}{\hat{p}(X_n)}\right| \qquad (48)$$
$$\le -\log\left(1 - \frac{\frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} + C_2 w}{\frac{\delta}{3N}}\right) \qquad (49)$$
$$= -\log\left(1 - \frac{3NK_{\max}}{w^d\delta}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} - \frac{3NC_2 w}{\delta}\right), \qquad (50)$$

provided the argument in the logarithm is positive. Finally, we have the upper bound on the variance

$$\mathbb{E}\left[\left(h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right)^2\right] = \frac{1}{N^2}\sum_{n=1}^{N}\mathbb{E}\left[(h(X) + \log p(X))^2\right] \qquad (51)$$
$$= \frac{1}{N}\left(\mathbb{E}[\log^2 p(X)] - h(X)^2\right) \qquad (52)$$
$$\le \frac{1}{N}C_1 \qquad (53)$$

and apply Chebyshev's inequality, showing that with probability greater than $1 - \frac{\delta}{3}$,

$$\left|h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right| \le \sqrt{\frac{3C_1}{N\delta}}. \qquad (54)$$

The union bound and the triangle inequality applied to (50) and (54) yield the desired result.
D LIBRARIES USED
For our experiments, we built upon code from the following sources.
• VIBERT (Mahabadi et al., 2021) at github.com/rabeehk/vibert. • TRANSFORMERS (Wolf et al., 2019) at github.com/huggingface/transformers. • DOE (McAllester & Stratos, 2020) at github.com/karlstratos/doe. • SMILE (Song & Ermon, 2019) at github.com/ermongroup/smile-mi-estimator. • InfoNCE, MINE, NWJ, CLUB (Cheng et al., 2020a) at github.com/Linear95/CLUB. | 1. What is the focus of the paper regarding differential entropy and mutual information?
2. What are the strengths of the proposed approach, particularly its ease of implementation and experiment coverage?
3. What are the weaknesses of the paper, including lack of novelty, inadequate comparisons, and shallow discussions?
4. Do you have any concerns about the performance of the proposed method in high-dimensional settings?
5. How does the reviewer assess the clarity and implementability of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a differentiable kernel-based estimator of differential entropy, named KNIFE.
KNIFE-based estimators can be applied to both conditional (on either discrete or continuous variables) differential entropy and mutual information. In essence, KNIFE leverages a kernel-based nonparametric likelihood estimator for the plug-in estimate of differential entropy, where the basis and covariance parameters are learned via MLE. The proposed method is validated on high-dimensional synthetic data and guiding the training of neural networks for real-world tasks, including domain adaptation and fair learning.
Review
Strengths:
Estimation of (differential) entropy and the related mutual information are topics in machine learning of fundamental importance, which drive a wide range of applications.
The proposed method is relatively easy to implement.
Experiments on the real-world dataset have decent coverage of important applications concerning mutual information.
This paper is written with clarity and is fairly easy to follow.
Weaknesses:
Originality. There is not much novelty in this work. The proposed solution is a simple plug-in estimator based on adaptive kernel density estimation. The techniques used are very standard and there is no new theory developed.
Lack of comparisons: the following competing estimators should be covered in the discussion or compared in experiments.
Plug-in estimators based on likelihood-ratio estimates, like neural estimators (or the JSD estimator) and ML/least-squares estimators (see [1] and references therein)
Nearest-neighbor estimator [2].
Lack of in-depth discussions. Variational schemes have been proposed to address the inadequacy of plug-in estimators in such settings (likelihood estimation for complex distributions in high dimensions is a long-standing challenge in statistics and machine learning); at the least, the paper should expand the discussion on that point. The experiments have focused on low-dimensional setups, where the proposed method works okay, but performance on high-dimensional random variables is unknown.
[1] Suzuki, Taiji, et al. "Approximating mutual information by maximum likelihood density ratio estimation." New challenges for feature selection in data mining and knowledge discovery. PMLR, 2008.
[2] Berrett, Thomas B., Richard J. Samworth, and Ming Yuan. "Efficient multivariate entropy estimation via k-nearest neighbour distances." The Annals of Statistics 47.1 (2019): 288-318.
ICLR | Title
KNIFE: Kernelized-Neural Differential Entropy Estimation
Abstract
Estimation of (differential) entropy and the related mutual information has been pursued with significant efforts by the machine learning community. To address shortcomings in previously proposed estimators for differential entropy, here we introduce KNIFE, a fully parameterized, differentiable kernel-based estimator of differential entropy. The flexibility of our approach also allows us to construct KNIFE-based estimators for conditional (on either discrete or continuous variables) differential entropy, as well as mutual information. We empirically validate our method on high-dimensional synthetic data and further apply it to guide the training of neural networks for real-world tasks. Our experiments on a large variety of tasks, including visual domain adaptation, textual fair classification, and textual fine-tuning demonstrate the effectiveness of KNIFE-based estimation.
1 INTRODUCTION
Learning tasks requires information (Principe et al., 2006) in the form of training data. Thus, information measures (Shannon, 1948) (e.g. entropy, conditional entropy and mutual information) have been a source of inspiration for the design of learning objectives in modern machine learning (ML) models (Linsker, 1989; Torkkola, 2006). Over the years, a plethora of estimators have been introduced to estimate the value of the aforementioned measures of information and they have been applied to many different problems, including information and coding theory, limiting distributions, model selection, design of experiment and optimal prior distribution, data disclosure, and relative importance of predictors (Ebrahimi et al., 2010). In these applications, traditional research focused on both developing new estimators and obtaining provable guarantees on the asymptotic behavior of these estimators (Liu et al., 2012; Verdú, 2019).
However, when used for training deep neural networks, additional requirements need to be satisfied. In particular, the estimator needs to be differentiable w.r.t. the data distribution (R1), computationally tractable (R2), and rapidly adapt to changes in the underlying distribution (R3). For instance, Mutual Information (MI), a fundamental measure of dependence between variables, only became a popular (standalone or regularizing) learning objective for DNNs once estimators satisfying the above requirements were proposed (Poole et al., 2019; Barber & Agakov, 2003). Although MI is notoriously difficult to estimate in high dimensions (Kraskov et al., 2004; Pichler et al., 2020; McAllester & Stratos, 2020), these estimators have demonstrated promising empirical results in unsupervised representation learning (Krause et al., 2010; Bridle et al., 1992; Hjelm et al., 2019; Tschannen et al., 2020), discrete/invariant representations (Hu et al., 2017; Ji et al., 2019), generative modelling (Chen et al., 2016; Zhao et al., 2017), textual disentangling (Cheng et al., 2020b; Colombo et al., 2021), and applications of the Information Bottleneck (IB) method (Mahabadi et al., 2021; Devlin et al., 2018; Alemi et al., 2016) among others. Compared to MI, Differential Entropy (DE) has received less attention from the ML community while also having interesting applications.
In this paper, we focus on the problem of DE estimation as this quantity naturally appears in many applications (e.g. reinforcement learning (Shyam et al., 2019; Hazan et al., 2019; Ahmed et al., 2019; Kim et al., 2019), IB (Alemi et al., 2016), mode collapse (Belghazi et al., 2018)). Traditional estimators of DE often violate at least one of the requirements (R1) – (R3) listed above (e.g. knearest neighbor based estimators violate (R1)). As a consequence, the absence of DE estimator for arbitrary data distributions forces deep learning researchers to either restrict themselves to special cases where closed-form expressions for DE are available (Shyam et al., 2019) or use MI as a proxy
(Belghazi et al., 2018). In this work, we introduce a Kernelized Neural dIFferential Entropy (KNIFE) estimator, that satisfies the aforementioned requirements and addresses limitations of existing DE estimators (Schraudolph, 2004; McAllester & Stratos, 2020). Stemming from recent theoretical insights (McAllester & Stratos, 2020) that justify the use of DE estimators as building blocks to better estimate MI, we further apply KNIFE to MI estimation. In the context of deep neural networks with high dimensional data (e.g. image, text), KNIFE achieves competitive empirical results in applications where DE or MI is required.
1.1 CONTRIBUTIONS
Our work advances methods in DE and MI estimation in several ways.
1. We showcase limitation of the existing DE estimators proposed in Schraudolph (2004); McAllester & Stratos (2020) with respect to desirable properties required for training deep neural networks. To address these shortcomings, we introduce KNIFE, a fully learnable kernel-based estimator of DE. The flexibility of KNIFE allows us to construct KNIFE-based estimators for conditional DE, conditioning on either a discrete or continuous random variable. 2. We prove learnability under natural conditions on the underlying probability distribution. By requiring a fixed Lipschitz condition and bounded support we are not only able to provide an asymptotic result, but also a confidence bound in the case of a finite training set. This extends the consistency result by Ahmad & Lin (1976). 3. We validate on synthetic datasets (including multi-modal, non-Gaussian distributions), that KNIFE addresses the identified limitations and outperforms existing methods on both DE and MI estimation. In particular, KNIFE more rapidly adapts to changes in the underlying data distribution. 4. We conduct extensive experiments on natural datasets (including text and images) to compare KNIFE-based MI estimators to most recent MI estimators. First, we apply KNIFE in the IB principle to fine-tune a pretrained language model. Using KNIFE, we leverage a closed-form expression of a part of the training objective and achieve the best scores among competing MI estimators. Second, on fair textual classification, the KNIFE-based MI estimator achieves near perfect disentanglement (with respect to the private, discrete label) at virtually no degradation of accuracy in the main task. Lastly, in the challenging scenario of visual domain adaptation, where both variables are continuous, KNIFE-based MI estimation also achieves superior results.
1.2 EXISTENT METHODS AND RELATED WORKS
DE estimation. Existing methods for estimating DE fit into one of three categories (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Verdú, 2019): plug-in estimates (Ahmad & Lin, 1976; Györfi & Van der Meulen, 1987), estimates based on sample-spacings (Tarasenko, 1968), and estimates based on nearest neighbor distances (Kozachenko & Leonenko, 1987; Tsybakov & Van der Meulen, 1996); (Berrett et al., 2019). Our proposed estimator falls into the first category and we will thus focus here on previous work using that methodology. Excellent summaries of all the available methods can be found in the works (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Wang et al., 2009; Verdú, 2019). In Ahmad & Lin (1976), a first nonparametric estimator of DE was suggested and theoretically analyzed. It builds on the idea of kernel density estimation using Parzen-Rosenblatt windowing (Rosenblatt, 1956; Parzen, 1962). More detailed analysis followed (Joe, 1989; Hall & Morton, 1993) but the estimator remained essentially unchanged. Unfortunately, this classical literature is mostly concerned with appropriate regularity conditions that guarantee asymptotic properties of estimators, such as (asymptotic) unbiasedness and consistency. Machine learning applications, however, usually deal with a fixed—often very limited—number of samples.
Differentiable DE estimation. A first estimator that employed a differential learning rule was introduced in Viola et al. (1996). Indeed, the estimator proposed therein is optimized using stochastic optimization, it only used a single kernel with a low number of parameters. An extension that uses a heteroscedastic kernel density estimate, i.e., using different kernels at different positions, has been proposed in Schraudolph (2004). Still the number of parameters was quite low and varying means in the kernels or variable weights were not considered. Although the estimation of DE remained a topic of major interest as illustrated by recent works focusing on special classes of distributions (Kolchinsky & Tracey, 2017; Chaubey & Vu, 2021) and nonparametric estimators (Sricharan et al., 2013; Kandasamy et al., 2015; Moon et al., 2021), the estimator introduced in Schraudolph (2004) was not further refined and hardly explored in recent works.
Differentiable MI estimation. In contrast, there has been a recent surge on new methods for the estimation of the closely related MI between two random variables. The most prominent examples include unnormalized energy-based variational lower bounds (Poole et al., 2019), the lower bounds developed in Nguyen et al. (2010) using variational characterization of f-divergence, the MINEestimator developed in Belghazi et al. (2018) from the Donsker-Varadhan representation of MI which can be also interpreted as an improvement of the plug-in estimator of Suzuki et al. (2008), the noise-contrastive based bound developed in van den Oord et al. (2018) and finally a contrastive upper bound (Cheng et al., 2020a). McAllester & Stratos (2020) point out shortcomings in other estimation strategies and introduce their own Differences of Entropies (DOE) method.
2 KNIFE
In this section we identify limitations of existing entropy estimators introduced in Schraudolph (2004); McAllester & Stratos (2020). Subsequently, we present KNIFE, which addresses these shortcomings.
2.1 LIMITATIONS OF EXISTING DIFFERENTIAL ENTROPY ESTIMATORS
Consider a continuous random vector X ∼ p in Rd. Our goal is to estimate the DE h(X) := − ∫ p(x) log p(x) dx. Given the intractability of this integral, we will rely on a Monte-Carlo estimate of h(X), using N i.i.d. samples Dx = {xn}Nn=1 to obtain
ĥORACLE(Dx) := − 1
N N∑ n=1 log p(xn). (1)
Unfortunately, assuming access to the true density p is often unrealistic, and we will thus construct an estimate p̂ that can then be plugged into (1) instead of p. If p̂ is smooth, the resulting plug-in estimator of DE is differentiable (R1).
Assuming access to an additional—ideally independent—set of M i.i.d. samples E = {x′m}Mm=1, we build upon the Parzen-Rosenblatt estimator (Rosenblatt, 1956; Parzen, 1962)
p̂(x;w, E) = 1 wdM M∑ m=1 κ ( x− x′m w ) , (2)
where w > 0 denotes the bandwidth and κ is a kernel density. The resulting entropy estimator when replacing p in (1) by (2) was analyzed in Ahmad & Lin (1976). In Schraudolph (2004), this approach was extended using the kernel estimator
p̂SCHRAU.(x; A, E) := 1
M M∑ m=1 κAm(x− x′m), (3)
where A := (A1, . . . , AM ) are (distinct, diagonal) covariance matrices and κA(x) = N (x; 0, A) is a centered Gaussian density with covariance matrix A.
The DOE method of McAllester & Stratos (2020) is a MI estimator that separately estimates a DE and a conditional DE. For DE, a simple Gaussian density estimate p̂DOE(x;θ) = κA(x− µ) is used, where θ = (A,µ) are the training parameters, the diagonal covariance matrix A and the mean µ.
While both SCHRAU. and DOE yield differentiable plug-in estimators for DE, they each have a major disadvantage. The strategy of Schraudolph (2004) fixes the kernel mean values at E , which implies that the method cannot adapt to a shifting input distribution (R3). On the other hand, DOE allows for rapid adaptation, but its simple structure makes it inadequate for the DE estimation of multi-modal densities. We illustrate these limitations in Section 3.1.
2.2 KNIFE ESTIMATOR
In KNIFE, the kernel density estimate is given by
p̂KNIFE(x;θ) := ∑_{m=1}^M um κAm(x − am), (4)
where θ := (A, a, u) and the additional parameters 0 ≤ u = (u1, u2, . . . , uM) with 1 · u = 1 and a = (a1, . . . , aM) are introduced. Note that p̂KNIFE(x;θ) is a smooth function of θ, and so is our proposed plug-in estimator
ĥKNIFE(Dx;θ) := −(1/N) ∑_{n=1}^N log p̂KNIFE(xn;θ). (5)
KNIFE combines the ideas of Schraudolph (2004); McAllester & Stratos (2020). It is differentiable and able to adapt to shifting input distributions, while capable of matching multi-modal distributions. Thus, as we will see in synthetic experiments, incorporating um and shifts am in the optimization enables the use of KNIFE in non-stationary settings, where the distribution of X evolves over time.
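To make (4) and (5) concrete, the following is a minimal PyTorch sketch (PyTorch is the framework used in our implementation, see Appendix A); the module name, the diagonal log-variance parameterization of the covariances Am, and the random initialization are illustrative choices here, not the exact released implementation.

```python
import math
import torch
import torch.nn as nn

class KnifeEstimator(nn.Module):
    """Sketch of p̂_KNIFE (Eq. (4)) with diagonal Gaussian kernels."""

    def __init__(self, d, M, init_samples=None):
        super().__init__()
        init = init_samples if init_samples is not None else torch.randn(M, d)
        self.a = nn.Parameter(init.clone())             # kernel means a_m
        self.log_var = nn.Parameter(torch.zeros(M, d))  # diag(A_m) via log-variances
        self.logit_u = nn.Parameter(torch.zeros(M))     # softmax gives u >= 0, sum u = 1

    def log_density(self, x):
        # log p̂_KNIFE(x; θ) for a batch x of shape (N, d)
        diff = x.unsqueeze(1) - self.a.unsqueeze(0)     # (N, M, d)
        log_kernel = -0.5 * (diff.pow(2) / self.log_var.exp()
                             + self.log_var + math.log(2 * math.pi)).sum(-1)
        log_u = torch.log_softmax(self.logit_u, dim=0)
        return torch.logsumexp(log_u + log_kernel, dim=1)

    def forward(self, x):
        # ĥ_KNIFE(D_x; θ) from Eq. (5): negative mean log-density
        return -self.log_density(x).mean()
```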
Learning step: Stemming from the observation that, by the Law of Large Numbers (LLN),
ĥKNIFE(Dx;θ) ≈ −E[log p̂KNIFE(X;θ)] = h(X) + DKL(p ‖ p̂KNIFE( · ;θ)) ≥ h(X), (6)
we propose to learn the parameters θ by minimizing ĥKNIFE, where E may be used to initialize a. Although not strictly equivalent due to the Monte-Carlo approximation, minimizing ĥKNIFE can be understood as minimizing the Kullback-Leibler (KL) divergence in (6), effectively minimizing the gap between ĥKNIFE and h(X). In fact, ĥKNIFE can also be interpreted as the standard maximum likelihood objective, widely used in modern machine learning. It is worth mentioning that the KNIFE estimator is fully differentiable with respect to θ and the optimization can be tackled by any gradient-based method (e.g., Adam (Kingma & Ba, 2014) or AdamW (Loshchilov & Hutter, 2017)).
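A sketch of this learning step, reusing the KnifeEstimator module above; sample_batch() and x_eval are placeholders for a data source drawing i.i.d. samples from p and a held-out evaluation set:

```python
knife = KnifeEstimator(d=64, M=128)
opt = torch.optim.Adam(knife.parameters(), lr=1e-3)
for step in range(2000):
    x = sample_batch()       # placeholder: (N, d) i.i.d. samples from p
    loss = knife(x)          # ĥ_KNIFE is itself the objective to minimize
    opt.zero_grad()
    loss.backward()
    opt.step()
h_estimate = knife(x_eval).item()  # evaluate on held-out samples
```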
2.3 CONVERGENCE ANALYSIS
Note that the classical Parzen-Rosenblatt estimator ĥ(Dx;w), where (2) is plugged into (1), is a special case of KNIFE. Thus, the convergence analysis provided in (Ahmad & Lin, 1976, Theorem 1) also applies and yields sufficient conditions for ĥKNIFE(Dx,θ)→ h(X). In Appendix C, we extend this result and, assuming that the underlying distribution p is compactly supported on X = [0, 1]d and L-Lipschitz continuous, the following theorem is proved. Theorem 1. For any δ > 0, there exists a function ε(N,M,w) such that, with probability at least 1− δ,
|ĥ(Dx;w) − h(X)| ≤ ε(N,M,w). Additionally, ε(N,M,w) → 0 as M,N → ∞ and w → 0 provided that
Nw → 0 and N² log N / (w^{2d} M) → 0, (7)
where M and N denote the number of samples in E and Dx, respectively.
The precise assumptions for Theorem 1 and an explicit formula for ε(N,M,w) are given in Theorem 2 in Appendix C. For instance, Theorem 1 provides a bound on the speed of convergence for the consistency analysis in (Ahmad & Lin, 1976, Theorem 1).
2.4 ESTIMATING CONDITIONAL DIFFERENTIAL ENTROPY AND MUTUAL INFORMATION
Similar to (McAllester & Stratos, 2020), the proposed DE estimator can be used to estimate other information measures. In particular, we can use KNIFE to construct estimators of conditional DE and MI. When estimating the conditional DE and MI for a pair of random variables (X,Y ) ∼ p, we not only use Dx = {xn}_{n=1}^N , but also the corresponding i.i.d. samples Dy = {yn}_{n=1}^N , where (xn, yn) are drawn according to p.
Conditional Differential Entropy. We estimate conditional DE h(X|Y ) by considering θ to be a parameterized function Θ(y) of y. Then all relations previously established naturally generalize and
p̂KNIFE(x|y; Θ) := p̂KNIFE(x; Θ(y)), ĥKNIFE(Dx|Dy; Θ) := −(1/N) ∑_{n=1}^N log p̂KNIFE(xn|yn; Θ). (8)
Naturally, minimization of (6) is now performed over the parameters of Θ. If Y is a continuous random variable, we use an artificial neural network Θ(y), taking y as its input. On the other hand, if Y ∈ Y is a discrete random variable, we have one parameter θ for each y ∈ Y , i.e., Θ = {θy}y∈Y and p̂KNIFE(x|y; Θ) = p̂KNIFE(x; Θ(y)) = p̂KNIFE(x;θy).
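For continuous Y, a minimal sketch of such a parameterized Θ(y) is given below; the two-layer architecture and the diagonal-Gaussian simplification mirror the marginal sketch above and are illustrative, not prescriptive.

```python
class ConditionalKnife(nn.Module):
    """Sketch of Eq. (8): a network Θ(y) emits per-sample kernel parameters (u, a, A)."""

    def __init__(self, d, M, y_dim, hidden=128):
        super().__init__()
        self.d, self.M = d, M
        self.net = nn.Sequential(
            nn.Linear(y_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, M * (2 * d + 1)),  # means, log-variances, weight logits
        )

    def log_density(self, x, y):
        a, log_var, logit_u = self.net(y).split(
            [self.M * self.d, self.M * self.d, self.M], dim=-1)
        a = a.view(-1, self.M, self.d)
        log_var = log_var.view(-1, self.M, self.d)
        diff = x.unsqueeze(1) - a                       # (N, M, d)
        log_kernel = -0.5 * (diff.pow(2) / log_var.exp()
                             + log_var + math.log(2 * math.pi)).sum(-1)
        return torch.logsumexp(torch.log_softmax(logit_u, -1) + log_kernel, -1)

    def forward(self, x, y):
        # ĥ_KNIFE(D_x | D_y; Θ): negative mean conditional log-density
        return -self.log_density(x, y).mean()
```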
Mutual Information. To estimate the MI between random variables X and Y (either discrete or continuous), recall that MI can be written as I(X;Y ) = h(X) − h(X|Y ). Therefore, we use the marginal and conditional DE estimators (5) and (8) to build a KNIFE-based MI estimator
ÎKNIFE(Dx,Dy;θ,Θ) := ĥKNIFE(Dx;θ)− ĥKNIFE(Dx|Dy; Θ). (9)
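With the two sketches above, (9) becomes a one-liner once both estimators have been fitted on their respective objectives:

```python
# Î_KNIFE(D_x, D_y; θ, Θ) = ĥ_KNIFE(D_x; θ) − ĥ_KNIFE(D_x | D_y; Θ)
mi_estimate = knife(x) - cond_knife(x, y)
```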
3 EXPERIMENTS USING SYNTHETIC DATA
3.1 DIFFERENTIAL ENTROPY ESTIMATION
In this section we apply KNIFE for DE estimation, comparing it to (3), the method introduced in Schraudolph (2004), subsequently labeled “SCHRAU.”. It is worth mentioning that we did not use the Expectation-Maximization algorithm suggested in (Schraudolph, 2004), but instead opted to use the same optimization technique as for KNIFE to facilitate a fair comparison.
3.1.1 GAUSSIAN DISTRIBUTION
As a sanity check, we test KNIFE on multivariate normal data in moderately high dimensions, comparing it to SCHRAU. and DOE, which we trained with the exact same parameters. We performed these experiments with d = 10 and d = 64 dimensional data. KNIFE yielded the lowest bias and variance in both cases, despite DOE being perfectly adapted to matching a multivariate Gaussian distribution. Additional details can be found in Appendix A.1.
In order to use a DE estimation primitive in a machine learning system, it must be able to adapt to a changing input distribution during training (R3). As already pointed out in Section 2.1, this is a severe limitation of SCHRAU., as re-drawing the kernel support E can be either impractical or, at the very least, requires a complete re-training of the entropy estimator. In (4), by contrast, the kernel support a is trainable and can thus adapt to a change of the input distribution. In order to showcase this ability, we utilize the approach of Cheng et al. (2020a) and successively decrease the entropy, observing how the estimator adapts. We perform this experiment with data of dimension d = 64 and repeatedly multiply the covariance matrix of the training vectors with a factor of a = 1/2. The resulting entropy estimation is depicted in Figure 1. It is apparent that SCHRAU. suffers from a varying bias. The bias increases with decreasing variance, as the kernel support is fixed and cannot adapt as the variance of Dx shrinks. DOE is perfectly adapted to a single Gaussian distribution and performs similarly to KNIFE.
3.1.2 TRIANGLE MIXTURE
KNIFE is able to cope with distributions that have multiple modes. While (3) is also capable of matching multi-modal distributions, DOE is unable to do so, as it approximates any distribution with a multivariate Gaussian. We illustrate this by matching a mixture of randomly drawn triangle distributions. The resulting estimated PDFs as well as the ground truth when estimating the entropy of a 1-dimensional mixture of triangles with 10 components can be observed in Figure 2 (left). With increasing dimension the difficulty of this estimation rises quickly, as in d dimensions the resulting PDF of independent c-component triangle mixtures has c^d modes. To showcase the performance of KNIFE in this challenging task, we ran 10 training runs for DE estimation of 2-component triangle mixtures in 8 dimensions. An example training run is depicted in Figure 2 (right).
3.2 MUTUAL INFORMATION ESTIMATION
Multivariate Gauss We repeat the experiments in (Cheng et al., 2020a), stepping up the MI I(X^d;Y^d) between d i.i.d. copies of joint normal random variables (X,Y ) by increasing their correlation coefficient, i.e., (X,Y ) are multivariate Gaussian with correlation coefficient ρi in the i-th epoch. A training run is depicted in the top of Figure 3. As in (Cheng et al., 2020a), we also repeat the experiment, applying a cubic transformation to Y . The estimation of MI between d i.i.d. copies of X and Y^3 can be observed in the middle row of Figure 3. The MI is unaffected by this bijective transformation. In Appendix A.3, the bias and variance are depicted separately.
Sum of Uniformly Distributed Variables In order to test the ability of KNIFE to adapt to distributions substantially different from the Gaussian kernel shape, we apply it in MI estimation of I(X^d;Y^d) with uniformly distributed data. To this end, let X and E be centered, uniformly distributed random variables with E[X²] = E[E²] = 1 and define Y = ρiX + √(1 − ρi²)E in the i-th epoch. One training run with d = 20 is shown in Figure 3 (bottom). Details about the source distribution as well as details of the experiments can be found in Appendix A.3.
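A sketch of this generator, reusing the torch and math imports from the earlier sketches (the √3 scaling makes the uniform variables unit-variance):

```python
def sample_uniform_pair(n, d, rho):
    # X, E ~ U[-√3, √3] i.i.d., so E[X²] = E[E²] = 1; Y = ρX + √(1−ρ²)E
    x = (2 * torch.rand(n, d) - 1) * math.sqrt(3)
    e = (2 * torch.rand(n, d) - 1) * math.sqrt(3)
    y = rho * x + math.sqrt(1 - rho ** 2) * e
    return x, y
```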
4 EXPERIMENTS ON NATURAL DATA
In this section, we benchmark our proposed KNIFE-based MI estimator on three practical applications, spanning textual and visual data. We reproduce and compare our method to the most recent MI estimators including MINE (Belghazi et al., 2018), NWJ (Nguyen et al., 2010), InfoNCE (van den Oord et al., 2018), CLUB (Cheng et al., 2020a), and DOE (McAllester & Stratos, 2020). We do not explicitly include the SMILE estimator (Song & Ermon, 2019) in our comparison as it has the same gradient as NWJ.
Common notation: In all following applications, we will use Φψ : X → Z to denote an encoder, where X is the raw input space (i.e., texts or images), and Z denotes a lower dimensional continuous feature space. Additionally, we will use Cψ : Z → Y to denote a shallow classifier from the latent space Z to a discrete or continuous target space Y for classification or regression, respectively. We will use ψ to denote the parameters of both models, Φψ and Cψ . CE denotes the cross entropy loss.
4.1 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
IB has recently been applied to fine-tune large-scale pretrained models (Mahabadi et al., 2021) such as BERT (Devlin et al., 2018) and aims at suppressing irrelevant features in order to reduce overfitting.
Problem statement. Given a textual input X ∈ X and a target label Y ∈ Y , the goal is to learn the encoder Φψ and classifier Cψ, such that Φψ(X) retains little information about X , while still producing discriminative features, allowing the prediction of Y . Thus, the loss of interest is:
L = λ · I(Φψ(X);X) − I(Φψ(X);Y ), (10)
with the compression term λ · I(Φψ(X);X) and the downstream term I(Φψ(X);Y ),
where λ controls the trade-off between the downstream and the compression terms.
Setup. Following Mahabadi et al. (2021) (relying on VUB), we work with the VIBERT model, which uses a Gaussian distribution as prior. Φψ is implemented as a stochastic encoder Φψ(X) = Z ∼ N (µψ(X),Σψ(X)). Details on the architecture of µψ and Σψ can be found in Appendix B. The classifier Cψ is composed of dense layers. To minimize L, the second part of the objective (10) is bounded using the variational bound from Barber & Agakov (2003). Since we use a Gaussian prior, h(Z|X) can be expressed in closed form.1 Thus, when using KNIFE, I(X;Z) = h(Z) − h(Z|X) can be estimated by using ĥKNIFE to estimate h(Z). We compare this KNIFE-based MI estimator with aforementioned MI estimators and the variational upper bound (VUB). For completeness, we also compare against a BERT model trained by direct minimization of a CE loss.
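A sketch of the resulting compression term, assuming a Gaussian encoder returning (mu, log_var) and the KnifeEstimator from Section 2.2; encoder, classifier, ce, lam, x, and y are placeholders:

```python
def gaussian_cond_entropy(log_var):
    # closed-form h(Z|X) for Z ~ N(µψ(X), Σψ(X)) with diagonal covariance
    d = log_var.shape[-1]
    return 0.5 * log_var.sum(-1).mean() + 0.5 * d * math.log(2 * math.pi * math.e)

mu, log_var = encoder(x)                               # stochastic encoder Φψ
z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)  # reparameterized sample of Z
i_xz = knife(z) - gaussian_cond_entropy(log_var)       # I(X;Z) = h(Z) − h(Z|X)
loss = lam * i_xz + ce(classifier(z), y)               # Eq. (10) with the CE surrogate
```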
We closely follow the protocol of (Mahabadi et al., 2021) and work on the GLUE benchmark (Wang et al., 2018) originally composed of 5 datasets. However, following (Mahabadi et al., 2021), we choose to finetune neither on WNLI (Morgenstern & Ortiz, 2015) nor on CoLA (Warstadt et al., 2019) due to reported flaws in these datasets. The evaluation is carried out on the standard validation splits as the test splits are not available. Following standard practice (Liu et al., 2019; Yang et al., 2019), we report the accuracy and the F1 for MRPC, the accuracy for RTE and the Pearson and Spearman correlation coefficient for STS-B.
Results. Table 1 reports our results on the GLUE benchmark. We observe that KNIFE obtains the best results on all three datasets and the lowest variance on MRPC and STS-B. The use of a Gaussian prior in the stochastic encoder Φψ could explain the observed improvement of KNIFE-based estimation over MI-estimators such as CLUB, InfoNCE, MINE, DOE, or NWJ.
4.2 FAIR TEXTUAL CLASSIFICATION
In fair classification, we would like the model to take its decision without utilizing private information such as gender, age, or race. For this task, MI can be minimized to disentangle the output of the encoder Z and a private label S ∈ S (e.g., gender, age, or race).
1h(Z|X) = (1/2) ln |Σψ(X)| + (d/2) ln(2πe), where d is the dimension of Z and | · | denotes the determinant.
The loss of interest is
L = CE(Y ;Cψ(Φψ(X))) + λ · I(Φψ(X);S), (11)
with the CE term as the downstream task and the MI term enforcing disentanglement,
where λ controls the trade-off between minimizing MI and CE loss. In this framework, a classifier is said to be fair or to achieve perfect privacy if no statistical information about S can be extracted from Φψ(X) by an adversarial classifier. Overall, a good model should achieve high accuracy on the main task (i.e., prediction of Y ) while removing information about the protected attribute S. This information is measured by training an offline classifier to recover the protected attribute S from Φψ(X).
Setup. We compute the second term of (11) with competing MI estimators, as well as the model from Elazar & Goldberg (2018), which will be referred to as “Adv”, as it utilizes an adversary to recover the private label from the latent representation Z. For KNIFE-based MI estimation, we use two DE estimators (as S is a binary label), following the approach outlined in Section 2.4. All derivations are detailed in Appendix B. We follow the experimental setting from Elazar & Goldberg (2018); Barrett et al. (2019) and use two datasets from the DIAL corpus (Blodgett et al., 2016) (over 50 million tweets) where the protected attribute S is the race and the main labels are sentiment or mention labels. The mention label indicates whether a tweet is conversational or not. We follow the official split using 160 000 tweets for training and two additional sets composed of 10 000 tweets each for development and testing. In all cases, the labels S and Y are binary and balanced, thus a random guess corresponds to 50% accuracy.
Results. Figure 4 gathers results on the fair classification task. The upper dashed lines represent the (private and main) task accuracies when training a model with only the CE loss (case λ = 0 in (11)). This shows that the learned encoding Φψ(X) contains information about the protected attribute, when training is only performed for the main task. On both the sentiment and mention task, we observe that a KNIFE-based estimator can achieve perfect privacy (see Figures 4b and 4d) with nearly no accuracy loss in the main task (see Figures 4a and 4c). The other MI estimators exhibit different behavior. For sentiment labels, most MI estimators fail to reach perfect privacy (CLUB, NWJ, DOE, and Adv) while others (InfoNCE) achieve perfect privacy while degrading the main task accuracy (10% loss on main accuracy). For mention labels, CLUB can also reach perfect privacy with almost no degradation of the accuracy of the main task. Overall, it is worth noting that KNIFE-based MI estimation enables better control of the degree of disentanglement than the reported baselines.
4.3 UNSUPERVISED DOMAIN ADAPTATION
In unsupervised domain adaptation, the goal is to transfer knowledge from the source domain (S) with a potentially large number of labeled examples to a target domain (T ), where only unlabeled examples are available.
Problem Statement. The learner is given access to labeled images from a source domain (xs, y) ∼ (XS , Y ) ∈ XS × Y and unlabeled images from a target domain xt ∼ XT ∈ XT . The goal is to
learn a classification model {Φψ, Cψ} that generalizes well to the target domain. Training models on the supervised source data only results in domain-specific latent representations Φψ(X) leading to poor generalization (when X is chosen randomly from {XS , XT }). In order to make the latent representations as domain-agnostic as possible, we follow the information-theoretic method proposed by Gholami et al. (2020), and used in Cheng et al. (2020a). The idea is to learn an additional binary model {Φdν , Cdν}, whose goal is to guess the domain D ∈ {0, 1} of X . The latent representation learned by Φdν will therefore contain all the domain-specific information that we would like the main encoder Φψ to discard. In other words, we would like Φψ(X) and Φdν(X) to be completely disentangled, which naturally corresponds to the minimization of I(Φψ(X); Φdν(X)). Concretely, the domain classifier is trained to minimize the CE between domain labels D and its own predictions, whereas the main classifier is trained to properly classify source samples while minimizing the MI between Φψ(X) and Φdν(X). Using fdν := Cdν ◦ Φdν and fψ := Cψ ◦ Φψ, the objectives are
min_ν CE(D; fdν(X)) and min_ψ CE(Y ; fψ(XS)) + λ · I(Φψ(X); Φdν(X)). (12)
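A sketch of one training iteration implementing (12); f, f_d, phi, phi_d, the optimizers, and the batches are placeholders, mi stands for a KNIFE-based estimate of I(Φψ(X); Φdν(X)), and detaching the domain features is one possible design choice, not necessarily the one used in our experiments:

```python
# domain classifier step: predict the domain label D from all images
loss_domain = ce(f_d(x_all), d_label)
opt_domain.zero_grad(); loss_domain.backward(); opt_domain.step()

# main step: supervised loss on the source domain plus the MI penalty
loss_main = ce(f(x_src), y_src) + lam * mi(phi(x_all), phi_d(x_all).detach())
opt_main.zero_grad(); loss_main.backward(); opt_main.step()
```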
Setup. The different MI estimators are compared based on their ability to guide training by estimating I(Φψ(X); Φdν(X)) in (12). We follow the setup of Cheng et al. (2020a) as closely as possible, and consider a total of 6 source/target scenarios formed with MNIST (LeCun & Cortes, 2010), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), and STL-10 (Coates et al., 2011) datasets. We reproduce all methods and allocate the same budget for hyper-parameter tuning to every method. The exhaustive list of hyper-parameters can be found in Appendix B.
Results. Results are presented in Table 2. The KNIFE-based estimator is able to outperform MI estimators in this challenging scenario where both Φψ(X) and Φdν(X) are continuous.
5 CONCLUDING REMARKS
We introduced KNIFE, a fully learnable, differentiable kernel-based estimator of differential entropy, designed for deep learning applications. We constructed a mutual information estimator based on KNIFE and showcased several applications. KNIFE is a general purpose estimator and does not require any special properties of the learning problem. It can thus be incorporated as part of any training objective, where differential entropy or mutual information estimation is desired. In the case of mutual information, one random variable may even be discrete.
Despite the fundamental challenges in the problem of differential entropy estimation, beyond limitations arising from the use of a finite number of samples, KNIFE has demonstrated promising empirical results in various representation learning tasks.
Future work will focus on improving the confidence bounds given in Theorem 1, in particular tailoring them towards KNIFE using tools from (Birge & Massart, 1995; Singh & Poczos, 2014). Another potential extension is direct estimation of the gradient of entropy, once p̂KNIFE(x;θ) has been learned (Mohamed et al., 2020; Song et al., 2020). This could be applied after the learning phase of KNIFE and is left for future work.
APPENDIX
A EXPERIMENTAL DETAILS OF EXPERIMENTS WITH SYNTHETIC DATA
Implementation of KNIFE in PyTorch (Paszke et al., 2019) is rather straightforward. The constraint on the weights u can be satisfied by applying a softmax transformation. The covariance matrices were parameterized by the lower-triangular factor in the Cholesky decomposition of the precision matrices, guaranteeing the definiteness constraint to be satisfied.
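For illustration, a sketch of these two constraint devices, reusing the PyTorch imports from the sketches above (shapes and names are ours; a strictly positive diagonal via exp is one way to guarantee positive definiteness):

```python
M, d = 128, 10
raw_u = nn.Parameter(torch.zeros(M))
u = torch.softmax(raw_u, dim=0)   # u >= 0 and 1·u = 1 by construction

# precision P_m = L_m L_mᵀ with L_m lower triangular is positive semi-definite;
# exponentiating the diagonal makes it strictly positive definite
raw_L = nn.Parameter(torch.randn(M, d, d))
L = torch.tril(raw_L, diagonal=-1) + torch.diag_embed(
    raw_L.diagonal(dim1=-2, dim2=-1).exp())
# quadratic form (x−a)ᵀ P_m (x−a) = ‖L_mᵀ(x−a)‖² for diff of shape (N, M, d)
quad = lambda diff: ((diff.unsqueeze(-2) @ L) ** 2).sum(dim=(-2, -1))
```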
A.1 DIFFERENTIAL ENTROPY ESTIMATION OF GAUSSIAN DATA
In Section 3.1.1, the estimation of the entropy h(X) = (d/2) log(2πe) for X ∼ N(0, Id) was performed with the hyperparameters given in Table 3. The mean error and its empirical standard deviation are reported in Table 5 over 20 runs, where an independently drawn evaluation set with the same size as the training set is used. At d = 10 we have the entropy h = (d/2) log(2πe) ≈ 14.19, while for the higher dimension, d = 64, we find h ≈ 90.81.
In the experiment depicted in Figure 1, entropy is decreased after every epoch by letting Xi ∼ N(0, a^i Id), where i = 0, . . . , 4 is the epoch index. That is, Xi = √(a^i) G, where G ∼ N(0, Id), resulting in a decrease of the DE by ∆ = −(d/2) log a ≈ 22.18 for a = 1/2 with every epoch. We start at h(X0) = (d/2) log(2πe) ≈ 90.81 and successively decrease until h(X4) = h(X0) − 4∆ ≈ 2.1. Additional parameters can be found in Table 4.
Computational Resources. Training was performed on an NVidia V100 GPU. Taken together, training for the first experiments of entropy estimation in dimensions d = 10, 64, as well as the experiment depicted in Figure 1 used GPU time of less than 5 minutes.
A.2 DIFFERENTIAL ENTROPY ESTIMATION OF TRIANGLE MIXTURES
In Section 3.1.2, we perform an estimation of the entropy of c-component triangle mixture distributions. The PDF of such a c-component triangle mixture is given by
p(x) = ∑_{i=1}^c wi Λsi(x − i − 1/2), (13)
where Λs(x) := (1/s) max{0, 2 − (4/s)|x|} is a centered triangle PDF with width s > 0. The scales s = (s1, . . . , sc) and weights w = (w1, . . . , wc) satisfy 0 < si, wi < 1 and ∑_{i=1}^c wi = 1. Before the experiment, we choose w uniformly at random from the c-probability simplex and the scales are chosen uniformly at random in [0.1, 1.0]. An example for c = 10 is the true PDF depicted in Figure 2
(left). For d > 1, we perform the estimation on d i.i.d. copies. Note that the triangle mixture with c components in d-dimensional space has cd modes, i.e., the support can be partitioned into cd disjoint components.
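A sketch for reproducing the 1-dimensional ground truth; the component centers follow the reconstruction of (13), and the integration grid is an arbitrary choice:

```python
import numpy as np

def triangle_pdf(x, s):
    # Λ_s(x) = (1/s) max{0, 2 − (4/s)|x|}: centered triangle PDF of width s
    return np.maximum(0.0, 2.0 - 4.0 * np.abs(x) / s) / s

def mixture_pdf(x, w, s):
    # Eq. (13): the i-th component is centered at i + 1/2
    return sum(wi * triangle_pdf(x - i - 0.5, si)
               for i, (wi, si) in enumerate(zip(w, s), start=1))

rng = np.random.default_rng(0)
c = 10
w = rng.dirichlet(np.ones(c))         # random point on the c-probability simplex
s = rng.uniform(0.1, 1.0, size=c)     # random scales
xs = np.linspace(0.5, c + 1.5, 200_000)
p = mixture_pdf(xs, w, s)
h_true = -np.trapz(p * np.log(np.clip(p, 1e-300, None)), xs)  # ground-truth DE
```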
The parameters of the experiment yielding Figure 2 (left) are given in Table 6, while the details of the experiment depicted in Figure 2 (right) can be found in Table 7. In the latter experiment, over ten runs, the entropy was estimated with a mean absolute error of 1.6563 ± 0.8528 by KNIFE, 2.4445 ± 0.5439 using (3), and 7.1070 ± 2.7984 by DOE. These are the mean absolute errors and their empirical standard deviations over all 10 runs, where the evaluation set was drawn independently from the training set and has the same size as the training set.
Computational Resources. Training was performed on an NVidia V100 GPU. Training in d = 1 dimension, that resulted in Figure 2 (left) can be performed in seconds, while all training required for producing Figure 2 (right) used approximately 1.5 hours of GPU time.
A.3 MUTUAL INFORMATION ESTIMATION
In Section 3.2, we estimate I(X^d;Y^d) and I(X^d; (Y^3)^d) where (X,Y ) are multivariate correlated Gaussian distributions with correlation coefficient ρi in the i-th epoch. Subsequently, we estimate I(X^d;Y^d) where X,E ∼ U[−√3, √3] are independent and Y is given by Y = ρiX + √(1 − ρi²)E. In both cases, ρi is chosen such that I(X^d;Y^d) = 2i in the i-th epoch.
All neural networks are randomly initialized. The bias, variance, and MSE during training as a function of the MI, can be observed in Figure 5.
The estimation is performed in 10 runs, randomly choosing the training meta-parameters as proposed by McAllester & Stratos (2020). In Figure 3 (bottom), we present the best run for each method, selected by distance from the true MI at the end of training. The bias, variance, and MSE during training, as a function of the MI, can be observed in Figure 6. Details about the source distribution as well as details of the experiments can be found in Table 8. During experimentation it turned out to be beneficial to train the parameters Θ and θ in (9) separately and to substantially increase the learning rate for the training of θ. Thus, we increase the learning rate for the training of θ by a factor of 10^3.
Model Architecture for Θ. We utilize the feed-forward architecture, also used in McAllester & Stratos (2020). It is a simple architecture with two linear layers, one hidden layer using tanh activation, immediately followed by an output layer. The number of neurons in the hidden layer is a meta-parameter selected randomly from {64, 128, 256} for each training run. Three models with this architecture are used for the three parameters (A,a,u), as described by (4), where only the output dimension is changed to fit the parameter dimension.
Computational Resources. Training was performed, using about 6 hours of GPU time on an NVidia V100 GPU to carry out the experiment depicted in Figure 3 (bottom).
B EXPERIMENTAL DETAILS OF EXPERIMENTS ON NATURAL DATA
B.1 ON THE PARAMETER UPDATE
In Section 4, we rely on two different types of models: pretrained (e.g., fine tuning with VIBERT) and randomly initialized (e.g., in fair classification and domain adaptation). When working with randomly initialized networks the parameters are updated. However, it is worth noting that in the literature the pretrained model parameters (i.e. ψ) are not always updated (see Ravfogel et al. (2020)). In our experiments: (i) We always update the parameters (even for pretrained models), and (ii) we did not change the way the parameters were updated in concurrent works (to ensure fair comparison). Specifically,
• for language model finetuning (Appendix B.2), we followed Mahabadi et al. (2021) and did a joint update;
• for the fair classification task (Appendix B.3), we followed common practice and used the algorithm described in Algorithm 1, which relies on an alternated update;
• for the domain adaptation task (Appendix B.4), we followed common practice and used a joint method.
B.2 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
For this experiment we follow the experimental setting introduced in Mahabadi et al. (2021) and work with the GLUE data2.
Model Architecture. We report in Table 9, the multilayer perceptron (MLP) used to compute the compressed sentence representations produced by BERT. Variance and Mean MLP networks are composed of fully connected layers.
2see https://gluebenchmark.com/faq
Algorithm 1 Disentanglement using a MI-based regularizer
1: INPUT Labelled training set D = {(xj, sj, yj) ∀j ∈ [n + 1, N]}; independent set of samples E; θ parameters of KNIFE; ψ parameters of the network.
2: INITIALIZE parameters θ, ψ
3: OPTIMIZATION
4: while (θ, ψ) not converged do
5:   for i ∈ [1, Unroll] do ▷ Learning step for KNIFE
6:     Sample a batch B from E
7:     Update θ using (9).
8:   end for
9:   Sample a batch B′ from D
10:  Update ψ with B′ using (11).
11: end while
12: OUTPUT Encoder and classifier weights ψ
Table 10: Experimental details on Information Bottleneck.
Parameter     | Value
Learning Rate | See Appendix B.2
Optimizer     | AdamW
Warmup Steps  | 0.0
Dropout       | 0.0
Batch Size    | 32
Model Training. All models are trained for 6 epochs and we use early stopping (the best model is selected on validation set error). For IB, λ is selected in {10^−4, 10^−5, 10^−6} and K is selected in {144, 192, 288, 384}. We follow (Alemi et al., 2016), where the posterior is averaged over 5 samples and a linear annealing schedule is used for λ. Additional hyper-parameters are reported in Table 10.
Dataset Statistics. Table 11 reports the statistics of the dataset used in our finetuning experiment.
Computational Resources. For all these experiments we rely on NVidia-P100 with 16GB of RAM. To complete the full grid-search on 10 seeds and on the three datasets, approximately 1.5k hours are required.
B.3 FAIR TEXTUAL CLASSIFICATION
In this section, we gather the experimental details for the textual fair classification task.
B.3.1 DETAILS OF THE KNIFE-BASED ESTIMATOR
In this experiment, we estimate the MI between a continuous random variable, namely Z = Φψ(X), and a discrete variable, denoted by S ∈ S = {1, 2, . . . , |S|}. We follow the strategy outlined in Section 2.4 for estimating the conditional DE h(Z|S). However, we will reuse the estimate of the conditional PDF p̂(z|s; Θ) to compute an estimate of the DE as
h(Z) ≈ −(1/N) ∑_{n=1}^N log(∑_{s∈S} p̂KNIFE(zn|s; Θ) p̂(s)), (14)
where p̂(s) = (1/N) |{n : sn = s}| is used to indicate the empirical distribution of S in the training set Ds.3 In our experiments, with |S| = 2, we found that estimating the DE h(Z) based on the KNIFE estimator learnt for h(Z|S) increases the stability of training. We adopted the same strategy for DOE.
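A sketch of (14) for binary S, where cond_log_density(z, s) is a placeholder wrapping the per-class KNIFE estimates log p̂KNIFE(z|s; θs):

```python
log_p_s = torch.full((2,), math.log(0.5))      # p̂(s) = 1/|S| with balanced batches
log_mix = torch.stack(
    [cond_log_density(z, s) + log_p_s[s] for s in (0, 1)], dim=1)
h_z = -torch.logsumexp(log_mix, dim=1).mean()  # Eq. (14)
```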
B.3.2 EXPERIMENTAL DETAILS
Model Architecture. For the encoder, we use a bidirectional GRU with two layers with hidden and input dimension set to 128. We use LeakyReLU as the activation function. The classification head is composed of fully connected layers of input dimension 256. We use a learning rate of 0.0001 for AdamW. The dropout rate is set to 0.2. The number of warmup steps is set to 1000.
3As we work with balanced batches, we will have p̂(s) = 1/|S|.
Computational Resources. For all these experiments, we rely on NVIDIA-P100 with 16GB of RAM. Each model is trained for 30k steps. The model with the lowest MI is selected. The training of a single network takes around 3 hours.
B.4 UNSUPERVISED DOMAIN ADAPTATION
We follow the experimental setup given in Cheng et al. (2020a) as closely as possible, i.e., we pick hyperparameters given in the paper, or if not provided, those set in the code:4
Model Training. We use Adam optimizer for all modules with a learning rate of 0.001. Batch size is set to 128. We set the weighting parameter λ = 0.1. The original code of Cheng et al. (2020a) uses 15 000 training iterations, but we found most methods had not properly converged at this stage, and hence use 25 000 iterations instead. Similar to other experiments, we set the kernel size M = 128.
Model Architecture. Table 12 summarizes the architectures used for the different modules. For the MI network of each method, the best configuration, based on the validation set of the first task MNIST→MNIST-M, is chosen among 4 configurations: with or without LayerNorm and with ReLU or tanh activation.
Computational Resources. For these experiments, we used a cluster of NVIDIA-V100 with 16GB of RAM. Each training (i.e., 25k iterations) on a single task requires on average 2 hours. Given that we have 6 tasks, and repeat the training for 3 different seeds, on average 36 hours computation time is required for each method.
C BOUNDING THE ERROR
In the following, fix L > 0 and let PL be the set of L-Lipschitz PDFs supported5 on X := [0, 1]^d, i.e., ∫_X p(x) dx = 1 and
∀x, y ∈ R^d : |p(x) − p(y)| ≤ L‖x − y‖ (15)
for p ∈ PL, where6 ‖x‖ := ∑_k |xk|.
Assume p ∈ PL and let κ be a PDF supported on X. In order to show that estimation of h(X) is achievable, we use a standard Parzen-Rosenblatt estimator p̂(x;w) := (1/(Mw^d)) ∑_{m=1}^M κ((x − X′m)/w), as in (2). The entropy estimate is then defined by the empirical average
ĥ(Dx;w) := −(1/N) ∑_{n=1}^N log p̂(Xn;w). (16)
Further, define the following quantities, which are assumed to be finite:
pmax := max{p(x) : x ∈ X}, (17)
C1 := ∫ p(x) log² p(x) dx, (18)
C2 := L ∫ ‖u‖ κ(u) du, (19)
Kmax := max{κ(x) : x ∈ X}. (20)
Note that it is easily seen that pmax ≤ L/2 and C1 ≤ max{pmax log² pmax, 4e^{−2}} by our assumptions. The requirement C2, Kmax < ∞ represents a mild condition on the kernel function κ. We can now show the following.
4https://github.com/Linear95/CLUB/tree/master/MI_DA.
5Any known compact support suffices. An affine transformation then yields X = [0, 1]^d, while possibly resulting in a different Lipschitz constant.
6The ℓ1 norm is chosen to facilitate subsequent computations. By the equivalence of norms on R^d, any norm suffices.
Theorem 2. With probability greater than 1 − δ we have
|h(X) − ĥ(Dx;w)| ≤ − log(1 − (3NKmax/(w^d δ)) √(log(6N/δ)/(2M)) − 3NC2w/δ) + √(3C1/(Nδ)), (21)
if the expression in the logarithm is positive.
In particular, the estimation error approaches zero as N → ∞ if w = w(N) → 0 and M = M(N) → ∞ are chosen such that
Nw → 0, (22)
N² log N / (w^{2d} M) → 0. (23)
We prove Theorem 2 in several Lemmas.
Lemma 3. Fix δ > 0 and x0 ∈ X. Then, with probability greater than 1 − δ,
|p(x0) − p̂(x0)| ≤ (Kmax/w^d) √(log(2/δ)/(2M)) + C2w. (24)
Proof. First, we can show that
|E[p̂(x0)] − p(x0)| = |(1/(Mw^d)) ∑_{m=1}^M ∫ κ((x0 − x)/w) p(x) dx − p(x0)| (25)
= |(1/w^d) ∫ κ((x0 − x)/w) p(x) dx − p(x0)| (26)
= |∫ κ(u) p(x0 − wu) du − p(x0)| (27)
= |∫ κ(u) [p(x0 − wu) − p(x0)] du| (28)
≤ ∫ κ(u) |p(x0 − wu) − p(x0)| du (29)
≤ ∫ κ(u) Lw‖u‖ du (30)
= wC2. (31)
Next, note that
|E[p̂(x0)] − p̂(x0)| ≤ (Kmax/w^d) √(log(2/δ)/(2M)) (32)
holds with probability greater than 1 − δ, as the requirements of McDiarmid's inequality (Paninski, 2003, Sec. 3) are satisfied with cj = Kmax/(Mw^d), and thus P{|E[p̂(x0)] − p̂(x0)| ≥ ε} ≤ δ with
ε = (Kmax/w^d) √(log(2/δ)/(2M)). (33)
Combining (31) and (32) gives (24).
Lemma 4. For any continuous random variable X supported on X and a ≥ 0, we have
P{p(X) ≤ a} ≤ a. (34)
Proof. We apply Markov's inequality to the random variable Y = 1/p(X) and observe that
P{p(X) ≤ a} = P{Y ≥ a⁻¹} ≤ vol(X) a = a. (35)
Lemma 5. If x > 0, y ≥ a > 0, 0 < a < 1, and |x − y| ≤ δ < a, then
|log x − log y| ≤ log(a/(a − δ)) = − log(1 − δ/a). (36)
Proof. Case x ≥ y. We can write y = a + b and x = y + c = a + b + c for b ≥ 0 and 0 ≤ c ≤ δ < a. Then
|log(x/y)| = log(1 + c/(a + b)) (37)
≤ log(1 + c/a) ≤ log(1 + δ/a). (38)
Furthermore,
log(a/(a − δ)) − log(1 + δ/a) = log(a²/((a + δ)(a − δ))) (39)
= log(a²/(a² − δ²)) (40)
≥ log(a²/a²) = 0, (41)
so that the bound in (38) is dominated by log(a/(a − δ)).
Case x < y. Here, we can write y = a + b and x = y − c = a + b − c for b ≥ 0 and 0 ≤ c ≤ δ < a. Then
|log(x/y)| = log(y/x) (42)
= log((a + b)/(a + b − c)) (43)
≤ log(a/(a − c)) (44)
≤ log(a/(a − δ)) = − log(1 − δ/a). (45)
Proof of Theorem 2. We apply Lemma 3 N times and use the union bound to show that with probability greater than 1 − δ/3 we have for every n ∈ [N]
|p(Xn) − p̂(Xn)| ≤ (Kmax/w^d) √(log(6N/δ)/(2M)) + C2w. (46)
Similarly, by Lemma 4, we have with probability greater than 1 − δ/3 that
p(Xn) ≥ δ/(3N) (47)
for all n ∈ [N].
Again by the union bound, we have that with probability greater than 1 − 2δ/3 both (46) and (47) hold for all n ∈ [N], and thus, by Lemma 5, we obtain
|ĥ(Dx;w) + (1/N) ∑_{n=1}^N log p(Xn)| = |(1/N) ∑_{n=1}^N log(p(Xn)/p̂(Xn))| (48)
≤ − log(1 − [(Kmax/w^d) √(log(6N/δ)/(2M)) + C2w] / (δ/(3N))) (49)
= − log(1 − (3NKmax/(w^d δ)) √(log(6N/δ)/(2M)) − 3NC2w/δ), (50)
provided the argument in the logarithm is positive. Finally, we have the upper bound on the variance
E[(h(X) + (1/N) ∑_{n=1}^N log p(Xn))²] = (1/N²) ∑_{n=1}^N E[(h(X) + log p(X))²] (51)
= (1/N) (E[log² p(X)] − h(X)²) (52)
≤ C1/N, (53)
and apply Chebyshev's inequality, showing that with probability greater than 1 − δ/3,
|h(X) + (1/N) ∑_{n=1}^N log p(Xn)| ≤ √(3C1/(Nδ)). (54)
The union bound and the triangle inequality applied to (50) and (54) yield the desired result.
D LIBRARIES USED
For our experiments, we built upon code from the following sources.
• VIBERT (Mahabadi et al., 2021) at github.com/rabeehk/vibert.
• TRANSFORMERS (Wolf et al., 2019) at github.com/huggingface/transformers.
• DOE (McAllester & Stratos, 2020) at github.com/karlstratos/doe.
• SMILE (Song & Ermon, 2019) at github.com/ermongroup/smile-mi-estimator.
• InfoNCE, MINE, NWJ, CLUB (Cheng et al., 2020a) at github.com/Linear95/CLUB. | 1. What is the focus of the paper regarding information-theoretic quantities in deep learning?
2. What are the requirements specified by the authors for a suitable differential entropy estimator?
3. How does the proposed estimator h^KNIFE differ from other commonly used estimators?
4. What are some concerns regarding the details of the proposed estimator and its implementation?
5. How does the paper demonstrate the effectiveness of the proposed estimator through experiments?
6. Are there any questions or concerns about the relevance of Theorem 1 to the KNIFE estimator? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes an estimator ĥKNIFE for differential entropy suited for applications in deep learning. A set of desirable requirements is specified, all of which are satisfied by ĥKNIFE but not by other commonly used estimators. Extensive experiments are performed to justify the new estimator.
Review
The paper tackles the important problem of estimating commonly used information-theoretic quantities like differential entropy and mutual information. For a random variable X with density p, differential entropy is h(X) := −∫ p(x) log p(x) dx. Given n i.i.d. samples {xi}_{i=1}^n of X, a naive Monte Carlo estimate −(1/n) ∑_{i=1}^n log p(xi) cannot be calculated since p is unknown. Therefore, a common solution to this problem is to estimate the density p using an i.i.d. sample {x′i}_{i=1}^m of X independent of {xi}_{i=1}^n. This paper uses a modification of standard kernel density estimators to estimate p, and therefore gets ĥKNIFE = −(1/n) ∑_{i=1}^n log p̂KNIFE(xi; θ).

It's easier to criticize, so let me do that first. The core of the paper, Section 2, where the proposed estimator is discussed, is devoid of necessary details. For example, what is a in the proposed estimator for p̂KNIFE? It is unclear how p̂KNIFE is estimated: what data are used (is it a subset of the training data to maintain independence? is it all training data? why?)? In the paragraph "Learning step:" of Section 2.2, why would minimizing ĥKNIFE be desirable? Isn't the goal to get ĥKNIFE close to h(X)? Although an interesting result, how is Theorem 1 relevant for the KNIFE estimator? The estimator ĥ used in Theorem 1 is not ĥKNIFE. The selling point of the paper is the added flexibility provided by ĥKNIFE over other estimators like ĥ.
The biggest strength of the paper is the extensive experiments to justify that the proposed estimator is in fact better than other estimators on many tasks. |
ICLR | Title
KNIFE: Kernelized-Neural Differential Entropy Estimation
Abstract
Estimation of (differential) entropy and the related mutual information has been pursued with significant efforts by the machine learning community. To address shortcomings in previously proposed estimators for differential entropy, here we introduce KNIFE, a fully parameterized, differentiable kernel-based estimator of differential entropy. The flexibility of our approach also allows us to construct KNIFE-based estimators for conditional (on either discrete or continuous variables) differential entropy, as well as mutual information. We empirically validate our method on high-dimensional synthetic data and further apply it to guide the training of neural networks for real-world tasks. Our experiments on a large variety of tasks, including visual domain adaptation, textual fair classification, and textual fine-tuning demonstrate the effectiveness of KNIFE-based estimation.
1 INTRODUCTION
Learning tasks requires information (Principe et al., 2006) in the form of training data. Thus, information measures (Shannon, 1948) (e.g. entropy, conditional entropy and mutual information) have been a source of inspiration for the design of learning objectives in modern machine learning (ML) models (Linsker, 1989; Torkkola, 2006). Over the years, a plethora of estimators have been introduced to estimate the value of the aforementioned measures of information and they have been applied to many different problems, including information and coding theory, limiting distributions, model selection, design of experiment and optimal prior distribution, data disclosure, and relative importance of predictors (Ebrahimi et al., 2010). In these applications, traditional research focused on both developing new estimators and obtaining provable guarantees on the asymptotic behavior of these estimators (Liu et al., 2012; Verdú, 2019).
However, when used for training deep neural networks, additional requirements need to be satisfied. In particular, the estimator needs to be differentiable w.r.t. the data distribution (R1), computationally tractable (R2), and rapidly adapt to changes in the underlying distribution (R3). For instance, Mutual Information (MI), a fundamental measure of dependence between variables, only became a popular (standalone or regularizing) learning objective for DNNs once estimators satisfying the above requirements were proposed (Poole et al., 2019; Barber & Agakov, 2003). Although MI is notoriously difficult to estimate in high dimensions (Kraskov et al., 2004; Pichler et al., 2020; McAllester & Stratos, 2020), these estimators have demonstrated promising empirical results in unsupervised representation learning (Krause et al., 2010; Bridle et al., 1992; Hjelm et al., 2019; Tschannen et al., 2020), discrete/invariant representations (Hu et al., 2017; Ji et al., 2019), generative modelling (Chen et al., 2016; Zhao et al., 2017), textual disentangling (Cheng et al., 2020b; Colombo et al., 2021), and applications of the Information Bottleneck (IB) method (Mahabadi et al., 2021; Devlin et al., 2018; Alemi et al., 2016) among others. Compared to MI, Differential Entropy (DE) has received less attention from the ML community while also having interesting applications.
In this paper, we focus on the problem of DE estimation as this quantity naturally appears in many applications (e.g. reinforcement learning (Shyam et al., 2019; Hazan et al., 2019; Ahmed et al., 2019; Kim et al., 2019), IB (Alemi et al., 2016), mode collapse (Belghazi et al., 2018)). Traditional estimators of DE often violate at least one of the requirements (R1) – (R3) listed above (e.g. k-nearest neighbor based estimators violate (R1)). As a consequence, the absence of a DE estimator for arbitrary data distributions forces deep learning researchers to either restrict themselves to special cases where closed-form expressions for DE are available (Shyam et al., 2019) or use MI as a proxy
(Belghazi et al., 2018). In this work, we introduce a Kernelized Neural dIFferential Entropy (KNIFE) estimator, that satisfies the aforementioned requirements and addresses limitations of existing DE estimators (Schraudolph, 2004; McAllester & Stratos, 2020). Stemming from recent theoretical insights (McAllester & Stratos, 2020) that justify the use of DE estimators as building blocks to better estimate MI, we further apply KNIFE to MI estimation. In the context of deep neural networks with high dimensional data (e.g. image, text), KNIFE achieves competitive empirical results in applications where DE or MI is required.
1.1 CONTRIBUTIONS
Our work advances methods in DE and MI estimation in several ways.
1. We showcase limitations of the existing DE estimators proposed in Schraudolph (2004); McAllester & Stratos (2020) with respect to desirable properties required for training deep neural networks. To address these shortcomings, we introduce KNIFE, a fully learnable kernel-based estimator of DE. The flexibility of KNIFE allows us to construct KNIFE-based estimators for conditional DE, conditioning on either a discrete or continuous random variable.
2. We prove learnability under natural conditions on the underlying probability distribution. By requiring a fixed Lipschitz condition and bounded support we are not only able to provide an asymptotic result, but also a confidence bound in the case of a finite training set. This extends the consistency result by Ahmad & Lin (1976).
3. We validate on synthetic datasets (including multi-modal, non-Gaussian distributions) that KNIFE addresses the identified limitations and outperforms existing methods on both DE and MI estimation. In particular, KNIFE more rapidly adapts to changes in the underlying data distribution.
4. We conduct extensive experiments on natural datasets (including text and images) to compare KNIFE-based MI estimators to the most recent MI estimators. First, we apply KNIFE in the IB principle to fine-tune a pretrained language model. Using KNIFE, we leverage a closed-form expression of a part of the training objective and achieve the best scores among competing MI estimators. Second, on fair textual classification, the KNIFE-based MI estimator achieves near perfect disentanglement (with respect to the private, discrete label) at virtually no degradation of accuracy in the main task. Lastly, in the challenging scenario of visual domain adaptation, where both variables are continuous, KNIFE-based MI estimation also achieves superior results.
1.2 EXISTENT METHODS AND RELATED WORKS
DE estimation. Existing methods for estimating DE fit into one of three categories (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Verdú, 2019): plug-in estimates (Ahmad & Lin, 1976; Györfi & Van der Meulen, 1987), estimates based on sample-spacings (Tarasenko, 1968), and estimates based on nearest neighbor distances (Kozachenko & Leonenko, 1987; Tsybakov & Van der Meulen, 1996; Berrett et al., 2019). Our proposed estimator falls into the first category and we will thus focus here on previous work using that methodology. Excellent summaries of all the available methods can be found in the works (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Wang et al., 2009; Verdú, 2019). In Ahmad & Lin (1976), a first nonparametric estimator of DE was suggested and theoretically analyzed. It builds on the idea of kernel density estimation using Parzen-Rosenblatt windowing (Rosenblatt, 1956; Parzen, 1962). More detailed analysis followed (Joe, 1989; Hall & Morton, 1993) but the estimator remained essentially unchanged. Unfortunately, this classical literature is mostly concerned with appropriate regularity conditions that guarantee asymptotic properties of estimators, such as (asymptotic) unbiasedness and consistency. Machine learning applications, however, usually deal with a fixed—often very limited—number of samples.
Differentiable DE estimation. A first estimator that employed a differential learning rule was introduced in Viola et al. (1996). While the estimator proposed therein is optimized using stochastic optimization, it only used a single kernel with a low number of parameters. An extension that uses a heteroscedastic kernel density estimate, i.e., using different kernels at different positions, has been proposed in Schraudolph (2004). Still, the number of parameters was quite low, and varying means in the kernels or variable weights were not considered. Although the estimation of DE remained a topic of major interest as illustrated by recent works focusing on special classes of distributions (Kolchinsky & Tracey, 2017; Chaubey & Vu, 2021) and nonparametric estimators (Sricharan et al., 2013; Kandasamy et al., 2015; Moon et al., 2021), the estimator introduced in Schraudolph (2004) was not further refined and hardly explored in recent works.
Differentiable MI estimation. In contrast, there has been a recent surge on new methods for the estimation of the closely related MI between two random variables. The most prominent examples include unnormalized energy-based variational lower bounds (Poole et al., 2019), the lower bounds developed in Nguyen et al. (2010) using the variational characterization of f-divergence, the MINE estimator developed in Belghazi et al. (2018) from the Donsker-Varadhan representation of MI which can also be interpreted as an improvement of the plug-in estimator of Suzuki et al. (2008), the noise-contrastive based bound developed in van den Oord et al. (2018) and finally a contrastive upper bound (Cheng et al., 2020a). McAllester & Stratos (2020) point out shortcomings in other estimation strategies and introduce their own Differences of Entropies (DOE) method.
2 KNIFE
In this section we identify limitations of existing entropy estimators introduced in Schraudolph (2004); McAllester & Stratos (2020). Subsequently, we present KNIFE, which addresses these shortcomings.
2.1 LIMITATIONS OF EXISTING DIFFERENTIAL ENTROPY ESTIMATORS
Consider a continuous random vector X ∼ p in Rd. Our goal is to estimate the DE h(X) := − ∫ p(x) log p(x) dx. Given the intractability of this integral, we will rely on a Monte-Carlo estimate of h(X), using N i.i.d. samples Dx = {xn}Nn=1 to obtain
ĥORACLE(Dx) := −(1/N) ∑_{n=1}^N log p(xn). (1)
Unfortunately, assuming access to the true density p is often unrealistic, and we will thus construct an estimate p̂ that can then be plugged into (1) instead of p. If p̂ is smooth, the resulting plug-in estimator of DE is differentiable (R1).
Assuming access to an additional—ideally independent—set of M i.i.d. samples E = {x′m}Mm=1, we build upon the Parzen-Rosenblatt estimator (Rosenblatt, 1956; Parzen, 1962)
p̂(x;w, E) = (1/(w^d M)) ∑_{m=1}^M κ((x − x′m)/w), (2)
where w > 0 denotes the bandwidth and κ is a kernel density. The resulting entropy estimator when replacing p in (1) by (2) was analyzed in Ahmad & Lin (1976). In Schraudolph (2004), this approach was extended using the kernel estimator
p̂SCHRAU.(x; A, E) := (1/M) ∑_{m=1}^M κAm(x − x′m), (3)
where A := (A1, . . . , AM ) are (distinct, diagonal) covariance matrices and κA(x) = N (x; 0, A) is a centered Gaussian density with covariance matrix A.
The DOE method of McAllester & Stratos (2020) is a MI estimator that separately estimates a DE and a conditional DE. For DE, a simple Gaussian density estimate p̂DOE(x;θ) = κA(x− µ) is used, where θ = (A,µ) are the training parameters, the diagonal covariance matrix A and the mean µ.
While both SCHRAU. and DOE yield differentiable plug-in estimators for DE, they each have a major disadvantage. The strategy of Schraudolph (2004) fixes the kernel mean values at E , which implies that the method cannot adapt to a shifting input distribution (R3). On the other hand, DOE allows for rapid adaptation, but its simple structure makes it inadequate for the DE estimation of multi-modal densities. We illustrate these limitations in Section 3.1.
2.2 KNIFE ESTIMATOR
In KNIFE, the kernel density estimate is given by
p̂KNIFE(x;θ) := ∑_{m=1}^M um κAm(x − am), (4)
where θ := (A, a, u) and the additional parameters 0 ≤ u = (u1, u2, . . . , uM) with 1 · u = 1 and a = (a1, . . . , aM) are introduced. Note that p̂KNIFE(x;θ) is a smooth function of θ, and so is our proposed plug-in estimator
ĥKNIFE(Dx;θ) := −(1/N) ∑_{n=1}^N log p̂KNIFE(xn;θ). (5)
KNIFE combines the ideas of Schraudolph (2004); McAllester & Stratos (2020). It is differentiable and able to adapt to shifting input distributions, while capable of matching multi-modal distributions. Thus, as we will see in synthetic experiments, incorporating um and shifts am in the optimization enables the use of KNIFE in non-stationary settings, where the distribution of X evolves over time.
Learning step: Stemming from the observation that, by the Law of Large Numbers (LLN),
ĥKNIFE(Dx;θ) ≈ −E[log p̂KNIFE(X;θ)] = h(X) + DKL(p ‖ p̂KNIFE( · ;θ)) ≥ h(X), (6)
we propose to learn the parameters θ by minimizing ĥKNIFE, where E may be used to initialize a. Although not strictly equivalent due to the Monte-Carlo approximation, minimizing ĥKNIFE can be understood as minimizing the Kullback-Leibler (KL) divergence in (6), effectively minimizing the gap between ĥKNIFE and h(X). In fact, ĥKNIFE can also be interpreted as the standard maximum likelihood objective, widely used in modern machine learning. It is worth mentioning that the KNIFE estimator is fully differentiable with respect to θ and the optimization can be tackled by any gradient-based method (e.g., Adam (Kingma & Ba, 2014) or AdamW (Loshchilov & Hutter, 2017)).
2.3 CONVERGENCE ANALYSIS
Note that the classical Parzen-Rosenblatt estimator ĥ(Dx;w), where (2) is plugged into (1), is a special case of KNIFE. Thus, the convergence analysis provided in (Ahmad & Lin, 1976, Theorem 1) also applies and yields sufficient conditions for ĥKNIFE(Dx,θ)→ h(X). In Appendix C, we extend this result and, assuming that the underlying distribution p is compactly supported on X = [0, 1]d and L-Lipschitz continuous, the following theorem is proved. Theorem 1. For any δ > 0, there exists a function ε(N,M,w) such that, with probability at least 1− δ,
|ĥ(Dx;w) − h(X)| ≤ ε(N,M,w). Additionally, ε(N,M,w) → 0 as M,N → ∞ and w → 0 provided that
Nw → 0 and N² log N / (w^{2d} M) → 0, (7)
where M and N denote the number of samples in E and Dx, respectively.
The precise assumptions for Theorem 1 and an explicit formula for ε(N,M,w) are given in Theorem 2 in Appendix C. For instance, Theorem 1 provides a bound on the speed of convergence for the consistency analysis in (Ahmad & Lin, 1976, Theorem 1).
2.4 ESTIMATING CONDITIONAL DIFFERENTIAL ENTROPY AND MUTUAL INFORMATION
Similar to (McAllester & Stratos, 2020), the proposed DE estimator can be used to estimate other information measures. In particular, we can use KNIFE to construct estimators of conditional DE and MI. When estimating the conditional DE and MI for a pair of random variables (X,Y ) ∼ p, we not only use Dx = {xn}_{n=1}^N , but also the corresponding i.i.d. samples Dy = {yn}_{n=1}^N , where (xn, yn) are drawn according to p.
Conditional Differential Entropy. We estimate conditional DE h(X|Y ) by considering θ to be a parameterized function Θ(y) of y. Then all relations previously established naturally generalize and
p̂KNIFE(x|y; Θ) := p̂KNIFE(x; Θ(y)), ĥKNIFE(Dx|Dy; Θ) := −(1/N) ∑_{n=1}^N log p̂KNIFE(xn|yn; Θ). (8)
Naturally, minimization of (6) is now performed over the parameters of Θ. If Y is a continuous random variable, we use an artificial neural network Θ(y), taking y as its input. On the other hand, if Y ∈ Y is a discrete random variable, we have one parameter θ for each y ∈ Y , i.e., Θ = {θy}y∈Y and p̂KNIFE(x|y; Θ) = p̂KNIFE(x; Θ(y)) = p̂KNIFE(x;θy).
Mutual Information. To estimate the MI between random variables X and Y (either discrete or continuous), recall that MI can be written as I(X;Y ) = h(X) − h(X|Y ). Therefore, we use the marginal and conditional DE estimators (5) and (8) to build a KNIFE-based MI estimator
ÎKNIFE(Dx,Dy;θ,Θ) := ĥKNIFE(Dx;θ)− ĥKNIFE(Dx|Dy; Θ). (9)
3 EXPERIMENTS USING SYNTHETIC DATA
3.1 DIFFERENTIAL ENTROPY ESTIMATION
In this section we apply KNIFE for DE estimation, comparing it to (3), the method introduced in Schraudolph (2004), subsequently labeled “SCHRAU.”. It is worth mentioning that we did not use the Expectation-Maximization algorithm suggested in (Schraudolph, 2004), but instead opted to use the same optimization technique as for KNIFE to facilitate a fair comparison.
3.1.1 GAUSSIAN DISTRIBUTION
As a sanity check, we test KNIFE on multivariate normal data in moderately high dimensions, comparing it to SCHRAU. and DOE, which we trained with the exact same parameters. We performed these experiments with d = 10 and d = 64 dimensional data. KNIFE yielded the lowest bias and variance in both cases, despite DOE being perfectly adapted to matching a multivariate Gaussian distribution. Additional details can be found in Appendix A.1.
In order to use a DE estimation primitive in a machine learning system, it must be able to adapt to a changing input distribution during training (R3). As already pointed out in Section 2.1, this is a severe limitation of SCHRAU., as re-drawing the kernel support E can be either impractical or, at the very least, requires a complete re-training of the entropy estimator. In (4), by contrast, the kernel support a is trainable and can thus adapt to a change of the input distribution. In order to showcase this ability, we utilize the approach of Cheng et al. (2020a) and successively decrease the entropy, observing how the estimator adapts. We perform this experiment with data of dimension d = 64 and repeatedly multiply the covariance matrix of the training vectors with a factor of a = 1/2. The resulting entropy estimation is depicted in Figure 1. It is apparent that SCHRAU. suffers from a varying bias. The bias increases with decreasing variance, as the kernel support is fixed and cannot adapt as the variance of Dx shrinks. DOE is perfectly adapted to a single Gaussian distribution and performs similarly to KNIFE.
3.1.2 TRIANGLE MIXTURE
KNIFE is able to cope with distributions that have multiple modes. While (3) is also capable of matching multi-modal distributions, DOE is unable to do so, as it approximates any distribution with a multivariate Gaussian. We illustrate this by matching a mixture of randomly drawn triangle distributions. The resulting estimated PDFs as well as the ground truth when estimating the entropy of a 1-dimensional mixture of triangles with 10 components can be observed in Figure 2 (left). With increasing dimension the difficulty of this estimation rises quickly, as in d dimensions the resulting PDF of independent c-component triangle mixtures has c^d modes. To showcase the performance of KNIFE in this challenging task, we ran 10 training runs for DE estimation of 2-component triangle mixtures in 8 dimensions. An example training run is depicted in Figure 2 (right).
3.2 MUTUAL INFORMATION ESTIMATION
Multivariate Gauss We repeat the experiments in (Cheng et al., 2020a), stepping up the MI I(X^d;Y^d) between d i.i.d. copies of joint normal random variables (X,Y ) by increasing their correlation coefficient, i.e., (X,Y ) are multivariate Gaussian with correlation coefficient ρi in the i-th epoch. A training run is depicted in the top of Figure 3. As in (Cheng et al., 2020a), we also repeat the experiment, applying a cubic transformation to Y . The estimation of MI between d i.i.d. copies of X and Y^3 can be observed in the middle row of Figure 3. The MI is unaffected by this bijective transformation. In Appendix A.3, the bias and variance are depicted separately.
Sum of Uniformly Distributed Variables In order to test the ability of KNIFE to adapt to distributions substantially different from the Gaussian kernel shape, we apply it in MI estimation of I(X^d;Y^d) with uniformly distributed data. To this end, let X and E be centered, uniformly distributed random variables with E[X²] = E[E²] = 1 and define Y = ρiX + √(1 − ρi²)E in the i-th epoch. One training run with d = 20 is shown in Figure 3 (bottom). Details about the source distribution as well as details of the experiments can be found in Appendix A.3.
4 EXPERIMENTS ON NATURAL DATA
In this section, we benchmark our proposed KNIFE-based MI estimator on three practical applications, spanning textual and visual data. We reproduce and compare our method to the most recent MI estimators including MINE (Belghazi et al., 2018), NWJ (Nguyen et al., 2010), InfoNCE (van den Oord et al., 2018), CLUB (Cheng et al., 2020a), and DOE (McAllester & Stratos, 2020). We do not explicitly include the SMILE estimator (Song & Ermon, 2019) in our comparison as it has the same gradient as NWJ.
Common notation: In all following applications, we will use Φψ : X → Z to denote an encoder, where X is the raw input space (i.e., texts or images), and Z denotes a lower dimensional continuous feature space. Additionally, we will use Cψ : Z → Y to denote a shallow classifier from the latent space Z to a discrete or continuous target space Y for classification or regression, respectively. We will use ψ to denote the parameters of both models, Φψ and Cψ . CE denotes the cross entropy loss.
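To make this notation concrete, the following is a minimal PyTorch sketch of the pair (Φψ, Cψ); all layer sizes and class names here are illustrative assumptions, not the architectures used in the experiments.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):  # plays the role of Phi_psi : X -> Z
    def __init__(self, in_dim: int = 784, z_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class ShallowClassifier(nn.Module):  # plays the role of C_psi : Z -> Y
    def __init__(self, z_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.head = nn.Linear(z_dim, n_classes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.head(z)

# For classification, CE is then nn.functional.cross_entropy(C(Phi(x)), y).
```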
4.1 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
IB has recently been applied to fine-tune large-scale pretrained models (Mahabadi et al., 2021) such as BERT (Devlin et al., 2018) and aims at suppressing irrelevant features in order to reduce overfitting.
Problem statement. Given a textual input X ∈ X and a target label Y ∈ Y , the goal is to learn the encoder Φψ and classifier Cψ, such that Φψ(X) retains little information about X , while still producing discriminative features, allowing the prediction of Y . Thus, the loss of interest is:
$$\mathcal{L} = \lambda \cdot \underbrace{I(\Phi_\psi(X);X)}_{\text{compression term}} - \underbrace{I(\Phi_\psi(X);Y)}_{\text{downstream term}}, \qquad (10)$$
where λ controls the trade-off between the downstream and the compression terms.
Setup. Following Mahabadi et al. (2021) (relying on VUB), we work with the VIBERT model, which uses a Gaussian distribution as prior. Φψ is implemented as a stochastic encoder Φψ(X) = Z ∼ N (µψ(X),Σψ(X)). Details on the architecture of µψ and Σψ can be found in Appendix B. The classifier Cψ is composed of dense layers. To minimize L, the second part of the objective (10) is bounded using the variational bound from Barber & Agakov (2003). Since we use a Gaussian prior, h(Z|X) can be expressed in closed form.1 Thus, when using KNIFE, I(X;Z) = h(Z) − h(Z|X) can be estimated by using ĥKNIFE to estimate h(Z). We compare this KNIFE-based MI estimator with aforementioned MI estimators and the variational upper bound (VUB). For completeness, we also compare against a BERT model trained by direct minimization of a CE loss.
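As a rough sketch of how the compression term can be assembled under these assumptions, the snippet below combines a KNIFE estimate of h(Z) with the closed-form Gaussian h(Z|X); it assumes a diagonal covariance parameterized by log-variances, and `z_entropy_knife` stands in for the learned estimate ĥ_KNIFE(Z).

```python
import math
import torch

def gaussian_cond_entropy(log_var: torch.Tensor) -> torch.Tensor:
    # Closed-form h(Z|X) for Z|X ~ N(mu(X), diag(exp(log_var))), averaged over the batch:
    # h(Z|X) = 0.5 * ln|Sigma(X)| + (d/2) * ln(2*pi*e).
    d = log_var.shape[-1]
    return 0.5 * log_var.sum(dim=-1).mean() + 0.5 * d * math.log(2 * math.pi * math.e)

def ib_compression_term(z_entropy_knife: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    # I(X;Z) = h(Z) - h(Z|X): h(Z) is estimated by KNIFE, h(Z|X) is exact.
    return z_entropy_knife - gaussian_cond_entropy(log_var)
```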
We closely follow the protocol of (Mahabadi et al., 2021) and work on the GLUE benchmark (Wang et al., 2018) originally composed of 5 datasets. However, following (Mahabadi et al., 2021), we choose to finetune neither on WNLI (Morgenstern & Ortiz, 2015) nor on CoLA (Warstadt et al., 2019) due to reported flaws in these datasets. The evaluation is carried out on the standard validation splits as the test splits are not available. Following standard practice (Liu et al., 2019; Yang et al., 2019), we report the accuracy and the F1 for MRPC, the accuracy for RTE and the Pearson and Spearman correlation coefficient for STS-B.
Results. Table 1 reports our results on the GLUE benchmark. We observe that KNIFE obtains the best results on all three datasets and the lowest variance on MRPC and STS-B. The use of a Gaussian prior in the stochastic encoder Φψ could explain the observed improvement of KNIFE-based estimation over MI-estimators such as CLUB, InfoNCE, MINE, DOE, or NWJ.
4.2 FAIR TEXTUAL CLASSIFICATION
In fair classification, we would like the model to take its decision without utilizing private information such as gender, age, or race. For this task, MI can be minimized to disentangle the output of the encoder Z and a private label S ∈ S (e.g., gender, age, or race).
¹ $h(Z|X) = \frac{1}{2}\ln|\Sigma_\psi(X)| + \frac{d}{2}\ln(2\pi e)$, where d is the dimension of Z and |·| denotes the determinant.
The resulting objective augments the downstream cross-entropy loss with an MI regularizer:

$$\mathcal{L} = \underbrace{\mathrm{CE}(Y; C_\psi(\Phi_\psi(X)))}_{\text{downstream task}} + \lambda \cdot \underbrace{I(\Phi_\psi(X); S)}_{\text{disentangled}}, \qquad (11)$$
where λ controls the trade-off between minimizing MI and CE loss. In this framework, a classifier is said to be fair or to achieve perfect privacy if no statistical information about S can be extracted from Φψ(X) by an adversarial classifier. Overall, a good model should achieve high accuracy on the main task (i.e., prediction of Y ) while removing information about the protected attribute S. This information is measured by training an offline classifier to recover the protected attribute S from Φψ(X).
Setup. We compute the second term of (11) with competing MI estimators, as well as the model from Elazar & Goldberg (2018), which will be referred to as “Adv”, as it utilizes an adversary to recover the private label from the latent representation Z. For KNIFE-based MI estimation, we use two DE estimators (as S is a binary label), following the approach outlined in Section 2.4. All derivations are detailed in Appendix B. We follow the experimental setting from Elazar & Goldberg (2018); Barrett et al. (2019) and use two datasets from the DIAL corpus (Blodgett et al., 2016) (over 50 million tweets) where the protected attribute S is the race and the main labels are sentiment or mention labels. The mention label indicates whether a tweet is conversational or not. We follow the official split using 160 000 tweets for training and two additional sets composed of 10 000 tweets each for development and testing. In all cases, the labels S and Y are binary and balanced, thus a random guess corresponds to 50% accuracy.
Results. Figure 4 gathers results on the fair classification task. The upper dashed lines represent the (private and main) task accuracies when training a model with only the CE loss (case λ = 0 in (11)). This shows that the learned encoding Φψ(X) contains information about the protected attribute, when training is only performed for the main task. On both the sentiment and mention task, we observe that a KNIFE-based estimator can achieve perfect privacy (see Figures 4b and 4d) with nearly no accuracy loss in the main task (see Figures 4a and 4c). The other MI estimators exhibit different behavior. For sentiment labels, most MI estimators fail to reach perfect privacy (CLUB, NWJ, DOE, and Adv) while others (InfoNCE) achieve perfect privacy while degrading the main task accuracy (10% loss on main accuracy). For mention labels, CLUB can also reach perfect privacy with almost no degradation of the accuracy of the main task. Overall, it is worth noting that KNIFE-based MI estimation enables better control of the degree of disentanglement than the reported baselines.
4.3 UNSUPERVISED DOMAIN ADAPTATION
In unsupervised domain adaptation, the goal is to transfer knowledge from the source domain (S) with a potentially large number of labeled examples to a target domain (T ), where only unlabeled examples are available.
Problem Statement. The learner is given access to labeled images from a source domain (x_s, y) ∼ (X_S, Y) ∈ X_S × Y and unlabeled images from a target domain x_t ∼ X_T ∈ X_T. The goal is to learn a classification model {Φ_ψ, C_ψ} that generalizes well to the target domain. Training models on the supervised source data only results in domain-specific latent representations Φ_ψ(X), leading to poor generalization (when X is chosen randomly from {X_S, X_T}). In order to make the latent representations as domain-agnostic as possible, we follow the information-theoretic method proposed by Gholami et al. (2020) and used in Cheng et al. (2020a). The idea is to learn an additional binary model {Φ^d_ν, C^d_ν}, whose goal is to guess the domain D ∈ {0, 1} of X. The latent representation learned by Φ^d_ν will therefore contain all the domain-specific information that we would like the main encoder Φ_ψ to discard. In other words, we would like Φ_ψ(X) and Φ^d_ν(X) to be completely disentangled, which naturally corresponds to the minimization of I(Φ_ψ(X); Φ^d_ν(X)). Concretely, the domain classifier is trained to minimize the CE between domain labels D and its own predictions, whereas the main classifier is trained to properly classify source samples while minimizing the MI between Φ_ψ(X) and Φ^d_ν(X). Using f^d_ν := C^d_ν ∘ Φ^d_ν and f_ψ := C_ψ ∘ Φ_ψ, the objectives are
$$\min_{\nu}\ \mathrm{CE}(D; f^d_\nu(X)) \qquad\text{and}\qquad \min_{\psi}\ \mathrm{CE}(Y; f_\psi(X_S)) + \lambda \cdot I(\Phi_\psi(X); \Phi^d_\nu(X)). \qquad (12)$$
Setup. The different MI estimators are compared based on their ability to guide training by estimating I(Φψ(X); Φdν(X)) in (12). We follow the setup of Cheng et al. (2020a) as closely as possible, and consider a total of 6 source/target scenarios formed with MNIST (LeCun & Cortes, 2010), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), and STL-10 (Coates et al., 2011) datasets. We reproduce all methods and allocate the same budget for hyper-parameter tuning to every method. The exhaustive list of hyper-parameters can be found in Appendix B.
Results. Results are presented in Table 2. The KNIFE-based estimator outperforms the competing MI estimators in this challenging scenario, where both Φ_ψ(X) and Φ^d_ν(X) are continuous.
5 CONCLUDING REMARKS
We introduced KNIFE, a fully learnable, differentiable kernel-based estimator of differential entropy, designed for deep learning applications. We constructed a mutual information estimator based on KNIFE and showcased several applications. KNIFE is a general purpose estimator and does not require any special properties of the learning problem. It can thus be incorporated as part of any training objective, where differential entropy or mutual information estimation is desired. In the case of mutual information, one random variable may even be discrete.
Despite the fundamental challenges in the problem of differential entropy estimation, beyond limitations arising from the use of a finite number of samples, KNIFE has demonstrated promising empirical results in various representation learning tasks.
Future work will focus on improving the confidence bounds given in Theorem 1, in particular by tailoring them to KNIFE using tools from Birge & Massart (1995) and Singh & Poczos (2014). Another potential extension is the direct estimation of the gradient of entropy once p̂_KNIFE(x;θ) has been learned (Mohamed et al., 2020; Song et al., 2020). This could be applied after the learning phase of KNIFE and is left for future work.
APPENDIX
A EXPERIMENTAL DETAILS OF EXPERIMENTS WITH SYNTHETIC DATA
Implementation of KNIFE in PyTorch (Paszke et al., 2019) is rather straightforward. The constraint on the weights u can be satisfied by applying a softmax transformation. The covariance matrices were parameterized by the lower-triangular factor in the Cholesky decomposition of the precision matrices, guaranteeing the definiteness constraint to be satisfied.
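A minimal PyTorch sketch of such a parameterization is given below: the mixture weights are obtained by a softmax and each precision matrix is represented through a lower-triangular factor, so both constraints are satisfied by construction. Mode count, initialization, and the choice P_m = L_mᵀL_m are our own illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class KnifeSketch(nn.Module):
    """Illustrative implementation of the density estimate (4) and objective (5)."""
    def __init__(self, n_modes: int, dim: int):
        super().__init__()
        self.means = nn.Parameter(torch.randn(n_modes, dim))      # a_m
        self.logits = nn.Parameter(torch.zeros(n_modes))          # softmax -> u_m
        # Lower-triangular factor L_m; the precision of mode m is P_m = L_m^T L_m.
        self.tri = nn.Parameter(torch.eye(dim).expand(n_modes, dim, dim).clone())

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        diff = x.unsqueeze(1) - self.means.unsqueeze(0)           # (batch, M, dim)
        L = torch.tril(self.tri)                                  # keep factors triangular
        y = torch.einsum('mij,bmj->bmi', L, diff)                 # L_m (x - a_m)
        quad = (y ** 2).sum(-1)                                   # Mahalanobis term
        half_logdet = torch.log(torch.diagonal(L, dim1=-2, dim2=-1).abs() + 1e-8).sum(-1)
        log_gauss = -0.5 * quad + half_logdet - 0.5 * x.shape[-1] * math.log(2 * math.pi)
        log_u = torch.log_softmax(self.logits, dim=0)             # weights on the simplex
        return torch.logsumexp(log_u.unsqueeze(0) + log_gauss, dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return -self.log_prob(x).mean()                           # entropy estimate (5)
```

Minimizing the forward pass with any gradient-based optimizer then realizes the learning step described in Section 2.2.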
A.1 DIFFERENTIAL ENTROPY ESTIMATION OF GAUSSIAN DATA
In Section 3.1.1, the estimation of the entropy $h(X) = \frac{d}{2}\log 2\pi e$ for X ∼ N(0, I_d) was performed with the hyperparameters given in Table 3. The mean error and its empirical standard deviation over 20 runs are reported in Table 5, where an independently drawn evaluation set with the same size as the training set is used. At d = 10 we have the entropy $h = \frac{d}{2}\log 2\pi e \approx 14.19$, while for the higher dimension d = 64 we find h ≈ 90.81.
In the experiment depicted in Figure 1, entropy is decreased after every epoch by letting $X_i \sim \mathcal{N}(0, a^i I_d)$, where i = 0, ..., 4 is the epoch index. That is, $X_i = \sqrt{a^i}\,G$ for a standard normal random vector G, resulting in a decrease of the DE by $\Delta = -\frac{d}{2}\log a \approx 22.18$ for a = 1/2 with every epoch. We start at $h(X_0) = \frac{d}{2}\log 2\pi e \approx 90.81$ and successively decrease until $h(X_4) = h(X_0) - 4\Delta \approx 2.1$. Additional parameters can be found in Table 4.
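The schedule can be verified with a few lines of Python; the numbers below are a direct evaluation of the formulas above.

```python
import math

d, a = 64, 0.5
h0 = d / 2 * math.log(2 * math.pi * math.e)        # ~ 90.81 nats
delta = -d / 2 * math.log(a)                        # ~ 22.18 nats per epoch
for i in range(5):
    print(f"epoch {i}: h = {h0 - i * delta:.2f}")   # 90.81, 68.63, 46.45, 24.27, 2.09
```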
Computational Resources. Training was performed on an NVidia V100 GPU. Taken together, training for the first experiments of entropy estimation in dimensions d = 10, 64, as well as the experiment depicted in Figure 1 used GPU time of less than 5 minutes.
A.2 DIFFERENTIAL ENTROPY ESTIMATION OF TRIANGLE MIXTURES
In Section 3.1.2, we perform an estimation of the entropy of c-component triangle mixture distributions. The PDF of such a c-component triangle mixture is given by

$$p(x) = \sum_{i=1}^{c} w_i\, \Lambda_{s_i}\!\left(x - \frac{i-1}{2}\right), \qquad (13)$$

where $\Lambda_s(x) := \frac{1}{s}\max\{0,\, 2 - \frac{4}{s}|x|\}$ is a centered triangle PDF with width s > 0. The scales s = (s_1, ..., s_c) and weights w = (w_1, ..., w_c) satisfy 0 < s_i, w_i < 1 and $\sum_{i=1}^{c} w_i = 1$. Before the experiment, we choose w uniformly at random from the c-probability simplex and the scales uniformly at random in [0.1, 1.0]. An example for c = 10 is the true PDF depicted in Figure 2 (left). For d > 1, we perform the estimation on d i.i.d. copies. Note that the triangle mixture with c components in d-dimensional space has c^d modes, i.e., the support can be partitioned into c^d disjoint components.
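For reference, a small NumPy sketch of this mixture in one dimension is given below; the component placement follows our reading of the shift in (13) as (i−1)/2, and the random seed is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 10
w = rng.dirichlet(np.ones(c))             # weights drawn from the c-probability simplex
s = rng.uniform(0.1, 1.0, size=c)         # widths in [0.1, 1.0]
centers = np.arange(c) / 2                # (i - 1) / 2 for i = 1, ..., c

def pdf(x: np.ndarray) -> np.ndarray:
    # Mixture density (13): sum_i w_i * Lambda_{s_i}(x - center_i).
    u = np.abs(np.asarray(x)[..., None] - centers)
    tri = np.maximum(0.0, 2.0 - 4.0 / s * u) / s
    return (w * tri).sum(-1)

def sample(n: int) -> np.ndarray:
    comp = rng.choice(c, size=n, p=w)     # pick a mixture component
    # A centered triangle of width s is the sum of two independent U(-s/4, s/4) variables.
    u = rng.uniform(-0.25, 0.25, (2, n)) * s[comp]
    return centers[comp] + u.sum(0)
```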
The parameters of the experiment yielding Figure 2 (left) are given in Table 6, while the details of the experiment depicted in Figure 2 (right) can be found in Table 7. In the latter experiment, over ten runs, the entropy was estimated with a mean absolute error of 1.6563 ± 0.8528 by KNIFE, 2.4445 ± 0.5439 using (3), and 7.1070 ± 2.7984 by DOE, where the empirical standard deviation is taken over all 10 runs and the evaluation set was drawn independently from the training set with the same size as the training set.
Computational Resources. Training was performed on an NVidia V100 GPU. Training in d = 1 dimension, that resulted in Figure 2 (left) can be performed in seconds, while all training required for producing Figure 2 (right) used approximately 1.5 hours of GPU time.
A.3 MUTUAL INFORMATION ESTIMATION
In Section 3.2, we estimate I(X^d;Y^d) and I(X^d;(Y^3)^d), where (X,Y) are multivariate correlated Gaussian distributions with correlation coefficient ρ_i in the i-th epoch. Subsequently, we estimate I(X^d;Y^d) where X, E ∼ U[−√3, √3] are independent and Y is given by $Y = \rho_i X + \sqrt{1-\rho_i^2}\,E$. In both cases, ρ_i is chosen such that I(X^d;Y^d) = 2i in the i-th epoch.
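In the Gaussian case, the required schedule admits a closed form: from $I(X^d;Y^d) = -\frac{d}{2}\log(1-\rho^2) = 2i$ one obtains $\rho_i = \sqrt{1 - e^{-4i/d}}$. A quick check is given below; the dimension d = 20 matches the uniform experiment, though for the uniform case itself ρ_i has no such simple closed form.

```python
import math

def rho_for_target_mi(i: int, d: int) -> float:
    # Solve -(d/2) * ln(1 - rho^2) = 2*i for rho (Gaussian case only).
    return math.sqrt(1.0 - math.exp(-4.0 * i / d))

d = 20
print([round(rho_for_target_mi(i, d), 3) for i in range(1, 5)])  # increasing correlations
```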
All neural networks are randomly initialized. The bias, variance, and MSE during training, as a function of the MI, can be observed in Figure 5.
The estimation is performed in 10 runs, randomly choosing the training meta-parameters as proposed by McAllester & Stratos (2020). In Figure 3 (bottom), we present the best run for each method, selected by distance from the true MI at the end of training. The bias, variance, and MSE during training, as a function of the MI, can be observed in Figure 6. Details about the source distribution as well as details of the experiments can be found in Table 8. During experimentation it turned out to be
beneficial to train the parameters Θ and θ in (9) separately and to substantially increase the learning rate for the training of θ. Thus, we increase the learning rate for the training of θ by a factor of 10³.
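In PyTorch, such a split of learning rates is conveniently expressed with optimizer parameter groups; the modules below are illustrative stand-ins, and only the 10³ ratio is taken from the text.

```python
import torch
import torch.nn as nn

Theta_net = nn.Linear(20, 8)                 # stand-in for the conditional network Theta
theta = nn.Parameter(torch.randn(8))         # stand-in for the marginal parameters theta

base_lr = 1e-4
optimizer = torch.optim.Adam([
    {"params": Theta_net.parameters(), "lr": base_lr},
    {"params": [theta], "lr": base_lr * 1e3},   # learning rate for theta raised by 10^3
])
```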
Model Architecture for Θ. We utilize the feed-forward architecture, also used in McAllester & Stratos (2020). It is a simple architecture with two linear layers, one hidden layer using tanh activation, immediately followed by an output layer. The number of neurons in the hidden layer is a meta-parameter selected randomly from {64, 128, 256} for each training run. Three models with this architecture are used for the three parameters (A,a,u), as described by (4), where only the output dimension is changed to fit the parameter dimension.
Computational Resources. Training was performed, using about 6 hours of GPU time on an NVidia V100 GPU to carry out the experiment depicted in Figure 3 (bottom).
B EXPERIMENTAL DETAILS OF EXPERIMENTS ON NATURAL DATA
B.1 ON THE PARAMETER UPDATE
In Section 4, we rely on two different types of models: pretrained (e.g., fine tuning with VIBERT) and randomly initialized (e.g., in fair classification and domain adaptation). When working with randomly initialized networks the parameters are updated. However, it is worth noting that in the literature the pretrained model parameters (i.e. ψ) are not always updated (see Ravfogel et al. (2020)). In our experiments: (i) We always update the parameters (even for pretrained models), and (ii) we did not change the way the parameters were updated in concurrent works (to ensure fair comparison). Specifically,
• for language model finetuning (Appendix B.2), we followed Mahabadi et al. (2021) and did a joint update;
• for the fair classification task (Appendix B.3), we followed common practice and used the algorithm described in Algorithm 1, which relies on an alternating update;
• for the domain adaptation task (Appendix B.4), we followed common practice and used a joint method.
B.2 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
For this experiment we follow the experimental setting introduced in Mahabadi et al. (2021) and work with the GLUE data.²
Model Architecture. We report in Table 9, the multilayer perceptron (MLP) used to compute the compressed sentence representations produced by BERT. Variance and Mean MLP networks are composed of fully connected layers.
² See https://gluebenchmark.com/faq
Algorithm 1 Disentanglement using an MI-based regularizer
1: INPUT: labelled training set D = {(x_j, s_j, y_j) ∀ j ∈ [n+1, N]}; independent set of samples E; KNIFE parameters θ; network parameters ψ.
2: INITIALIZE parameters θ, ψ
3: OPTIMIZATION:
4: while (θ, ψ) not converged do
5:   for i ∈ [1, Unroll] do            ▷ learning step for KNIFE
6:     Sample a batch B from E
7:     Update θ using (9)
8:   end for
9:   Sample a batch B′ from D
10:  Update ψ with B′ using (11)
11: end while
12: OUTPUT: encoder and classifier weights ψ
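The loop below sketches the alternating structure of Algorithm 1 in PyTorch; the modules, the synthetic batches, and the scalar placeholder for the KNIFE objective (9) are all illustrative assumptions; only the update order mirrors the algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder, clf = nn.Linear(16, 8), nn.Linear(8, 2)     # stand-ins for Phi_psi and C_psi
knife = nn.Linear(8, 1)                              # placeholder for the KNIFE loss (9)
opt_theta = torch.optim.Adam(knife.parameters(), lr=1e-3)
opt_psi = torch.optim.Adam(list(encoder.parameters()) + list(clf.parameters()), lr=1e-4)
UNROLL, LAMBDA = 5, 0.1

for step in range(100):
    for _ in range(UNROLL):                          # lines 5-8: learning step for KNIFE
        xb = torch.randn(32, 16)                     # batch B from E
        loss_knife = knife(encoder(xb).detach()).mean()
        opt_theta.zero_grad(); loss_knife.backward(); opt_theta.step()
    xb, yb = torch.randn(32, 16), torch.randint(0, 2, (32,))   # batch B' from D
    loss_main = F.cross_entropy(clf(encoder(xb)), yb) + LAMBDA * knife(encoder(xb)).mean()
    opt_psi.zero_grad(); loss_main.backward(); opt_psi.step()  # line 10: update psi
```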
Table 10: Experimental details on Information Bottleneck.

Parameter       Value
Learning Rate   See Appendix B.2
Optimizer       AdamW
Warmup Steps    0.0
Dropout         0.0
Batch Size      32
Model Training. All models are trained for 6 epochs and we use early stopping (the best model is selected on validation set error). For IB, λ is selected in {10⁻⁴, 10⁻⁵, 10⁻⁶} and K is selected in {144, 192, 288, 384}. We follow Alemi et al. (2016), where the posterior is averaged over 5 samples and a linear annealing schedule is used for λ. Additional hyper-parameters are reported in Table 10.
Dataset Statistics. Table 11 reports the statistics of the dataset used in our finetuning experiment.
Computational Resources. For all these experiments we rely on NVidia-P100 with 16GB of RAM. To complete the full grid-search on 10 seeds and on the three datasets, approximately 1.5k hours are required.
B.3 FAIR TEXTUAL CLASSIFICATION
In this section, we gather the experimental details for the textual fair classification task.
B.3.1 DETAILS OF THE KNIFE-BASED ESTIMATOR
In this experiment, we estimate the MI between a continuous random variable, namely Z = Φψ(X), and a discrete variable, denoted by S ∈ S = {1, 2, . . . , |S|}. We follow the strategy outlined in Section 2.4 for estimating the conditional DE h(Z|S). However, we will reuse the estimate of the conditional PDF p̂(z|s; Θ) to compute an estimate of the DE as
$$h(Z) \approx -\frac{1}{N}\sum_{n=1}^{N} \log\left(\sum_{s\in\mathcal{S}} \hat{p}_{\mathrm{KNIFE}}(z_n|s;\Theta)\,\hat{p}(s)\right), \qquad (14)$$

where $\hat{p}(s) = \frac{1}{N}\,|\{n : s_n = s\}|$ is used to indicate the empirical distribution of S in the training set D_s.³ In our experiments, with |S| = 2, we found that estimating the DE h(Z) based on the KNIFE estimator learnt for h(Z|S) increases the stability of training. We adopted the same strategy for DOE.
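A sketch of (14) under these assumptions, with the log-sum-exp trick for numerical stability (function and variable names are ours):

```python
import torch

def marginal_entropy(log_p_cond: torch.Tensor, p_s: torch.Tensor) -> torch.Tensor:
    # log_p_cond: (N, |S|) entries log p_KNIFE(z_n | s; Theta); p_s: (|S|,) empirical p(s).
    # Implements (14): h(Z) ~= -mean_n log( sum_s p(z_n | s) p(s) ).
    log_mix = torch.logsumexp(log_p_cond + torch.log(p_s), dim=1)
    return -log_mix.mean()

# Toy usage with |S| = 2 and balanced classes, i.e., p(s) = 1/|S|:
print(marginal_entropy(torch.randn(128, 2), torch.tensor([0.5, 0.5])))
```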
B.3.2 EXPERIMENTAL DETAILS
Model Architecture. For the encoder, we use a bidirectional GRU with two layers with hidden and input dimension set to 128. We use LeakyReLU as the activation function. The classification head is composed of fully connected layers of input dimension 256. We use a learning rate of 0.0001 for AdamW. The dropout rate is set to 0.2. The number of warmup steps is set to 1000.
³ As we work with balanced batches, we will have $\hat{p}(s) = 1/|\mathcal{S}|$.
Computational Resources. For all these experiments, we rely on NVIDIA-P100 with 16GB of RAM. Each model is trained for 30k steps. The model with the lowest MI is selected. The training of a single network takes around 3 hours.
B.4 UNSUPERVISED DOMAIN ADAPTATION
We follow the experimental setup given in Cheng et al. (2020a) as closely as possible, i.e., we pick hyperparameters given in the paper, or if not provided, those set in the code:4
Model Training. We use Adam optimizer for all modules with a learning rate of 0.001. Batch size is set to 128. We set the weighting parameter λ = 0.1. The original code of Cheng et al. (2020a) uses 15 000 training iterations, but we found most methods had not properly converged at this stage, and hence use 25 000 iterations instead. Similar to other experiments, we set the kernel size M = 128.
Model Architecture. Table 12 summarizes the architectures used for the different modules. For the MI network of each method, the best configuration, based on the validation set of the first task MNIST→MNIST-M, is chosen among 4 configurations: with or without LayerNorm and with ReLU or tanh activation.
Computational Resources. For these experiments, we used a cluster of NVIDIA-V100 with 16GB of RAM. Each training (i.e., 25k iterations) on a single task requires on average 2 hours. Given that we have 6 tasks, and repeat the training for 3 different seeds, on average 36 hours computation time is required for each method.
C BOUNDING THE ERROR
In the following, fix L > 0 and let $\mathcal{P}_L$ be the set of L-Lipschitz PDFs supported⁵ on $\mathcal{X} := [0,1]^d$, i.e., $\int_{\mathcal{X}} p(x)\,dx = 1$ and

$$\forall x, y \in \mathbb{R}^d : \quad |p(x) - p(y)| \le L\|x - y\| \qquad (15)$$

for $p \in \mathcal{P}_L$, where⁶ $\|x\| := \sum_k |x_k|$.
Assume $p \in \mathcal{P}_L$ and let κ be a PDF supported on $\mathcal{X}$. In order to show that estimation of h(X) is achievable, we use a standard Parzen-Rosenblatt estimator

$$\hat{p}(x; w) := \frac{1}{Mw^d}\sum_{m=1}^{M} \kappa\!\left(\frac{x - X'_m}{w}\right),$$

as in (2). The entropy estimate is then defined by the empirical average

$$\hat{h}(\mathcal{D}_x; w) := -\frac{1}{N}\sum_{n=1}^{N} \log \hat{p}(X_n; w). \qquad (16)$$
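For concreteness, a NumPy sketch of this estimator is shown below. It uses a Gaussian kernel for simplicity, whereas the analysis assumes a compactly supported κ; bandwidth and sample sizes are arbitrary.

```python
import numpy as np

def parzen_entropy(train: np.ndarray, support: np.ndarray, w: float) -> float:
    # train = D_x of shape (N, d), support = E of shape (M, d); bandwidth w.
    # Plug-in estimate (16): -(1/N) * sum_n log p_hat(X_n; w).
    N, d = train.shape
    M = support.shape[0]
    sq = ((train[:, None, :] - support[None, :, :]) / w) ** 2     # (N, M, d)
    log_kappa = -0.5 * sq.sum(-1) - 0.5 * d * np.log(2 * np.pi)
    log_p = -np.log(M * w**d) + np.log(np.exp(log_kappa).sum(axis=1) + 1e-300)
    return float(-log_p.mean())

rng = np.random.default_rng(0)
x, e = rng.normal(size=(2000, 2)), rng.normal(size=(2000, 2))
print(parzen_entropy(x, e, w=0.3))   # roughly ln(2*pi*e) = 2.84 for standard normal data
```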
Further, define the following quantities, which are assumed to be finite:

$$p_{\max} := \max\{p(x) : x \in \mathcal{X}\}, \qquad (17)$$
$$C_1 := \int p(x)\log^2 p(x)\,dx, \qquad (18)$$
$$C_2 := L\int \|u\|\,\kappa(u)\,du, \qquad (19)$$
$$K_{\max} := \max\{\kappa(x) : x \in \mathcal{X}\}. \qquad (20)$$

Note that it is easily seen that $p_{\max} \le L/2$ and $C_1 \le \max\{p_{\max}\log^2 p_{\max},\, 4e^{-2}\}$ by our assumptions. The requirement $C_2, K_{\max} < \infty$ represents a mild condition on the kernel function κ. We can now show the following.
⁴ https://github.com/Linear95/CLUB/tree/master/MI_DA
⁵ Any known compact support suffices. An affine transformation then yields X = [0, 1]^d, while possibly resulting in a different Lipschitz constant.
⁶ The ℓ₁ norm is chosen to facilitate subsequent computations. By the equivalence of norms on R^d, any norm suffices.
Theorem 2. With probability greater than 1 − δ, we have

$$|h(X) - \hat{h}(\mathcal{D}_x; w)| \le -\log\left(1 - \frac{3NK_{\max}}{w^d\delta}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} - \frac{3NC_2 w}{\delta}\right) + \sqrt{\frac{3C_1}{N\delta}}, \qquad (21)$$

if the expression in the logarithm is positive.

In particular, the estimation error approaches zero as N → ∞ if w = w(N) → 0 and M = M(N) → ∞ are chosen such that

$$Nw \to 0, \qquad (22)$$
$$\frac{N^2 \log N}{w^{2d} M} \to 0. \qquad (23)$$
We prove Theorem 2 in several Lemmas.
Lemma 3. Fix δ > 0 and $x_0 \in \mathcal{X}$. Then, with probability greater than 1 − δ,

$$|p(x_0) - \hat{p}(x_0)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}} + C_2 w. \qquad (24)$$
Proof. First, we can show that

$$\begin{aligned}
|\mathbb{E}[\hat{p}(x_0)] - p(x_0)| &= \left|\frac{1}{Mw^d}\sum_{m=1}^{M}\int \kappa\!\left(\frac{x_0 - x}{w}\right)p(x)\,dx - p(x_0)\right| &(25)\\
&= \left|\frac{1}{w^d}\int \kappa\!\left(\frac{x_0 - x}{w}\right)p(x)\,dx - p(x_0)\right| &(26)\\
&= \left|\int \kappa(u)\,p(x_0 - wu)\,du - p(x_0)\right| &(27)\\
&= \left|\int \kappa(u)\,[p(x_0 - wu) - p(x_0)]\,du\right| &(28)\\
&\le \int \kappa(u)\,|p(x_0 - wu) - p(x_0)|\,du &(29)\\
&\le \int \kappa(u)\,Lw\|u\|\,du &(30)\\
&= wC_2. &(31)
\end{aligned}$$

Next, note that

$$|\mathbb{E}[\hat{p}(x_0)] - \hat{p}(x_0)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}} \qquad (32)$$

holds with probability greater than 1 − δ, as the requirements of McDiarmid's inequality (Paninski, 2003, Sec. 3) are satisfied with $c_j = \frac{K_{\max}}{Mw^d}$, and thus $\mathbb{P}\{|\mathbb{E}[\hat{p}(x_0)] - \hat{p}(x_0)| \ge \varepsilon\} \le \delta$ with

$$\varepsilon = \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}}. \qquad (33)$$

Combining (31) and (32) gives (24).
Lemma 4. For any continuous random variable X supported on $\mathcal{X}$ and a ≥ 0, we have

$$\mathbb{P}\{p(X) \le a\} \le a. \qquad (34)$$

Proof. We apply Markov's inequality to the random variable $Y = \frac{1}{p(X)}$ and observe that

$$\mathbb{P}\{p(X) \le a\} = \mathbb{P}\{Y \ge a^{-1}\} \le \mathrm{vol}(\mathcal{X})\,a = a. \qquad (35)$$
Lemma 5. If x > 0, y ≥ a > 0, 0 < a < 1, and |x − y| ≤ δ < a, then

$$|\log x - \log y| \le \log\frac{a}{a-\delta} = -\log\left(1 - \frac{\delta}{a}\right). \qquad (36)$$
Proof. Case x ≥ y. We can write y = a + b and x = y + c = a + b + c for b ≥ 0 and 0 ≤ c ≤ δ < a. Then

$$\left|\log\frac{x}{y}\right| = \log\left(1 + \frac{c}{a+b}\right) \le \log\left(1 + \frac{c}{a}\right) \le \log\left(1 + \frac{\delta}{a}\right). \qquad (37)\text{--}(38)$$

Furthermore,

$$\log\frac{a}{a-\delta} - \log\left(1 + \frac{\delta}{a}\right) = \log\frac{a^2}{(a+\delta)(a-\delta)} = \log\frac{a^2}{a^2 - \delta^2} \ge \log 1 = 0, \qquad (39)\text{--}(41)$$

so the bound (36) also covers this case.

Case x < y. Here, we can write y = a + b and x = y − c = a + b − c for b ≥ 0 and 0 ≤ c ≤ δ < a. Then

$$\left|\log\frac{x}{y}\right| = \log\frac{y}{x} = \log\frac{a+b}{a+b-c} \le \log\frac{a}{a-c} \le \log\frac{a}{a-\delta} = -\log\left(1 - \frac{\delta}{a}\right). \qquad (42)\text{--}(45)$$
Proof of Theorem 2. We apply Lemma 3 N times and use the union bound to show that, with probability greater than 1 − δ/3, we have for every n ∈ [N]

$$|p(X_n) - \hat{p}(X_n)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} + C_2 w. \qquad (46)$$

Similarly, by Lemma 4, we have with probability greater than 1 − δ/3 that

$$p(X_n) \ge \frac{\delta}{3N} \qquad (47)$$

for all n ∈ [N].

Again by the union bound, with probability greater than 1 − 2δ/3 both (46) and (47) hold for all n ∈ [N], and thus, by Lemma 5, we obtain

$$\left|\hat{h}(\mathcal{D}_x;w) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right| = \left|\frac{1}{N}\sum_{n=1}^{N}\log\frac{p(X_n)}{\hat{p}(X_n)}\right| \qquad (48)$$
$$\le -\log\left(1 - \frac{\frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} + C_2 w}{\frac{\delta}{3N}}\right) \qquad (49)$$
$$= -\log\left(1 - \frac{3NK_{\max}}{w^d\delta}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} - \frac{3NC_2 w}{\delta}\right), \qquad (50)$$

provided the argument of the logarithm is positive. Finally, we have the upper bound on the variance

$$\mathbb{E}\left[\left(h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right)^2\right] = \frac{1}{N^2}\sum_{n=1}^{N}\mathbb{E}[(h(X) + \log p(X))^2] \qquad (51)$$
$$= \frac{1}{N}\left(\mathbb{E}[\log^2 p(X)] - h(X)^2\right) \qquad (52)$$
$$\le \frac{C_1}{N}, \qquad (53)$$

and apply Chebyshev's inequality, showing that with probability greater than 1 − δ/3,

$$\left|h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right| \le \sqrt{\frac{3C_1}{N\delta}}. \qquad (54)$$

The union bound and the triangle inequality applied to (50) and (54) yield the desired result.
D LIBRARIES USED
For our experiments, we built upon code from the following sources.
• VIBERT (Mahabadi et al., 2021) at github.com/rabeehk/vibert.
• TRANSFORMERS (Wolf et al., 2019) at github.com/huggingface/transformers.
• DOE (McAllester & Stratos, 2020) at github.com/karlstratos/doe.
• SMILE (Song & Ermon, 2019) at github.com/ermongroup/smile-mi-estimator.
• InfoNCE, MINE, NWJ, CLUB (Cheng et al., 2020a) at github.com/Linear95/CLUB. | 1. What is the main contribution of the paper in terms of its approach to estimating differential entropy?
2. What are the strengths and weaknesses of the proposed estimator, particularly in comparison to other existing methods?
3. How does the reviewer assess the theoretical analysis provided in the paper?
4. Are there any concerns or suggestions regarding the empirical experiments presented in the paper?
5. How does the reviewer evaluate the overall completeness and significance of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper provides a new approach to estimating differential entropy called KNIFE that is also applied to mutual information estimation. The authors define their estimator using a parametric model based on estimating a KDE. The authors provide some theoretical analyses of the estimator and multiple empirical experiments where the proposed estimator outperforms several other estimators.
Review
While the empirical results are promising, ultimately this paper is incomplete. First, the authors completely ignore the many estimators of information theoretic measures that have good empirical results and strong theoretical results. These include [R1-R5]. The authors should include these in the discussion of prior work at the very least and compare the empirical results to the ones that are also differentiable.
Second, the theoretical results are very weak and may even be wrong. Minimax estimation results for estimating entropy, divergence, and mutual information have been established [R1,R2,R6]. All of these results show that lower bounds on the estimation accuracy depend on the dimension of the data as well as the smoothness of the densities. The authors' convergence rate results do take into account the density smoothness but appear to be independent of the data dimension. The authors should resolve this apparent discrepancy.
Other comments: In Section 1.2 the authors state: "In machine learning applications, however, the use of asymptotic results is not realistically justified." I disagree with this. Asymptotic results often relate to finite sample results, i.e. many estimators with good asymptotic theory often have better empirical results as well.
While it's nice that KNIFE adapts to new data, it seems that most nonparametric estimators would automatically adapt. Thus the lack of adaptability seems to be more a problem with parametric estimators and wouldn't be an issue with the estimators given in [R1-R5].
I'm somewhat skeptical about minimizing the LHS of (6). While the explanation provided makes intuitive sense, in practice, due to finite samples, it does seem like underestimating the entropy could still happen. Perhaps some kind of concentration inequality could be used to obtain a more accurate bound instead of using the LLN?
[R1] Moon et al., "Ensemble estimation of generalized mutual information with applications to genomics," IEEE Trans. on IT, 2021.
[R2] Kandasamy et al., "Nonparametric von Mises Estimators for Entropies, Divergences and Mutual Informations," NeurIPS, 2015.
[R3] Singh and Poczos, "Exponential Concentration of a Density Functional Estimator," NeurIPS, 2014.
[R4] Sricharan et al., "Ensemble Estimators for Multivariate Entropy Estimation," IEEE Trans. on IT, 2013.
[R5] Berrett et al., "Efficient multivariate entropy estimation via k-nearest neighbour distances," Annals of Statistics, 2019.
[R6] Birge and Massart, "Estimation of Integral Functionals of a Density," Annals of Statistics, 1995.
Post-rebuttal update: I have read the authors' response to my review and the other reviewers. I appreciate the revisions the authors have made up to this point, especially regarding the theoretical results. However, I do believe that comparisons to other estimators should be done before I can recommend publication. In their comment, the authors claim that some of these estimators are not well-suited for the proposed use-cases. However, many of these estimators are based on plug-in approaches similar to the KNIFE estimator. Thus I believe that they can be compared to as well.
I have thus raised my score to be marginal, leaning towards reject. |
ICLR | Title
KNIFE: Kernelized-Neural Differential Entropy Estimation
Abstract
Estimation of (differential) entropy and the related mutual information has been pursued with significant efforts by the machine learning community. To address shortcomings in previously proposed estimators for differential entropy, here we introduce KNIFE, a fully parameterized, differentiable kernel-based estimator of differential entropy. The flexibility of our approach also allows us to construct KNIFE-based estimators for conditional (on either discrete or continuous variables) differential entropy, as well as mutual information. We empirically validate our method on high-dimensional synthetic data and further apply it to guide the training of neural networks for real-world tasks. Our experiments on a large variety of tasks, including visual domain adaptation, textual fair classification, and textual fine-tuning demonstrate the effectiveness of KNIFE-based estimation.
1 INTRODUCTION
Learning tasks requires information (Principe et al., 2006) in the form of training data. Thus, information measures (Shannon, 1948) (e.g. entropy, conditional entropy and mutual information) have been a source of inspiration for the design of learning objectives in modern machine learning (ML) models (Linsker, 1989; Torkkola, 2006). Over the years, a plethora of estimators have been introduced to estimate the value of the aforementioned measures of information and they have been applied to many different problems, including information and coding theory, limiting distributions, model selection, design of experiment and optimal prior distribution, data disclosure, and relative importance of predictors (Ebrahimi et al., 2010). In these applications, traditional research focused on both developing new estimators and obtaining provable guarantees on the asymptotic behavior of these estimators (Liu et al., 2012; Verdú, 2019).
However, when used for training deep neural networks, additional requirements need to be satisfied. In particular, the estimator needs to be differentiable w.r.t. the data distribution (R1), computationally tractable (R2), and rapidly adapt to changes in the underlying distribution (R3). For instance, Mutual Information (MI), a fundamental measure of dependence between variables, only became a popular (standalone or regularizing) learning objective for DNNs once estimators satisfying the above requirements were proposed (Poole et al., 2019; Barber & Agakov, 2003). Although MI is notoriously difficult to estimate in high dimensions (Kraskov et al., 2004; Pichler et al., 2020; McAllester & Stratos, 2020), these estimators have demonstrated promising empirical results in unsupervised representation learning (Krause et al., 2010; Bridle et al., 1992; Hjelm et al., 2019; Tschannen et al., 2020), discrete/invariant representations (Hu et al., 2017; Ji et al., 2019), generative modelling (Chen et al., 2016; Zhao et al., 2017), textual disentangling (Cheng et al., 2020b; Colombo et al., 2021), and applications of the Information Bottleneck (IB) method (Mahabadi et al., 2021; Devlin et al., 2018; Alemi et al., 2016) among others. Compared to MI, Differential Entropy (DE) has received less attention from the ML community while also having interesting applications.
In this paper, we focus on the problem of DE estimation as this quantity naturally appears in many applications (e.g. reinforcement learning (Shyam et al., 2019; Hazan et al., 2019; Ahmed et al., 2019; Kim et al., 2019), IB (Alemi et al., 2016), mode collapse (Belghazi et al., 2018)). Traditional estimators of DE often violate at least one of the requirements (R1) – (R3) listed above (e.g. k-nearest neighbor based estimators violate (R1)). As a consequence, the absence of a DE estimator for arbitrary data distributions forces deep learning researchers to either restrict themselves to special cases where closed-form expressions for DE are available (Shyam et al., 2019) or use MI as a proxy
(Belghazi et al., 2018). In this work, we introduce a Kernelized Neural dIFferential Entropy (KNIFE) estimator, that satisfies the aforementioned requirements and addresses limitations of existing DE estimators (Schraudolph, 2004; McAllester & Stratos, 2020). Stemming from recent theoretical insights (McAllester & Stratos, 2020) that justify the use of DE estimators as building blocks to better estimate MI, we further apply KNIFE to MI estimation. In the context of deep neural networks with high dimensional data (e.g. image, text), KNIFE achieves competitive empirical results in applications where DE or MI is required.
1.1 CONTRIBUTIONS
Our work advances methods in DE and MI estimation in several ways.
1. We showcase limitations of the existing DE estimators proposed in Schraudolph (2004); McAllester & Stratos (2020) with respect to desirable properties required for training deep neural networks. To address these shortcomings, we introduce KNIFE, a fully learnable kernel-based estimator of DE. The flexibility of KNIFE allows us to construct KNIFE-based estimators for conditional DE, conditioning on either a discrete or continuous random variable.
2. We prove learnability under natural conditions on the underlying probability distribution. By requiring a fixed Lipschitz condition and bounded support, we are not only able to provide an asymptotic result, but also a confidence bound in the case of a finite training set. This extends the consistency result by Ahmad & Lin (1976).
3. We validate on synthetic datasets (including multi-modal, non-Gaussian distributions) that KNIFE addresses the identified limitations and outperforms existing methods on both DE and MI estimation. In particular, KNIFE more rapidly adapts to changes in the underlying data distribution.
4. We conduct extensive experiments on natural datasets (including text and images) to compare KNIFE-based MI estimators to the most recent MI estimators. First, we apply KNIFE in the IB principle to fine-tune a pretrained language model. Using KNIFE, we leverage a closed-form expression of a part of the training objective and achieve the best scores among competing MI estimators. Second, on fair textual classification, the KNIFE-based MI estimator achieves near perfect disentanglement (with respect to the private, discrete label) at virtually no degradation of accuracy in the main task. Lastly, in the challenging scenario of visual domain adaptation, where both variables are continuous, KNIFE-based MI estimation also achieves superior results.
1.2 EXISTENT METHODS AND RELATED WORKS
DE estimation. Existing methods for estimating DE fit into one of three categories (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Verdú, 2019): plug-in estimates (Ahmad & Lin, 1976; Györfi & Van der Meulen, 1987), estimates based on sample-spacings (Tarasenko, 1968), and estimates based on nearest neighbor distances (Kozachenko & Leonenko, 1987; Tsybakov & Van der Meulen, 1996); (Berrett et al., 2019). Our proposed estimator falls into the first category and we will thus focus here on previous work using that methodology. Excellent summaries of all the available methods can be found in the works (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Wang et al., 2009; Verdú, 2019). In Ahmad & Lin (1976), a first nonparametric estimator of DE was suggested and theoretically analyzed. It builds on the idea of kernel density estimation using Parzen-Rosenblatt windowing (Rosenblatt, 1956; Parzen, 1962). More detailed analysis followed (Joe, 1989; Hall & Morton, 1993) but the estimator remained essentially unchanged. Unfortunately, this classical literature is mostly concerned with appropriate regularity conditions that guarantee asymptotic properties of estimators, such as (asymptotic) unbiasedness and consistency. Machine learning applications, however, usually deal with a fixed—often very limited—number of samples.
Differentiable DE estimation. A first estimator that employed a differential learning rule was introduced in Viola et al. (1996). While the estimator proposed therein is optimized using stochastic optimization, it only uses a single kernel with a low number of parameters. An extension that uses a heteroscedastic kernel density estimate, i.e., using different kernels at different positions, was proposed in Schraudolph (2004). Still, the number of parameters remained quite low, and varying kernel means or variable weights were not considered. Although the estimation of DE remained a topic of major interest, as illustrated by recent works focusing on special classes of distributions (Kolchinsky & Tracey, 2017; Chaubey & Vu, 2021) and nonparametric estimators (Sricharan et al., 2013; Kandasamy et al., 2015; Moon et al., 2021), the estimator introduced in Schraudolph (2004) was not further refined and has hardly been explored in recent works.
Differentiable MI estimation. In contrast, there has been a recent surge of new methods for the estimation of the closely related MI between two random variables. The most prominent examples include unnormalized energy-based variational lower bounds (Poole et al., 2019), the lower bounds developed in Nguyen et al. (2010) using a variational characterization of f-divergence, the MINE estimator developed in Belghazi et al. (2018) from the Donsker-Varadhan representation of MI, which can also be interpreted as an improvement of the plug-in estimator of Suzuki et al. (2008), the noise-contrastive based bound developed in van den Oord et al. (2018), and finally a contrastive upper bound (Cheng et al., 2020a). McAllester & Stratos (2020) point out shortcomings in other estimation strategies and introduce their own Differences of Entropies (DOE) method.
2 KNIFE
In this section we identify limitations of existing entropy estimators introduced in Schraudolph (2004); McAllester & Stratos (2020). Subsequently, we present KNIFE, which addresses these shortcomings.
2.1 LIMITATIONS OF EXISTING DIFFERENTIAL ENTROPY ESTIMATORS
Consider a continuous random vector X ∼ p in $\mathbb{R}^d$. Our goal is to estimate the DE $h(X) := -\int p(x)\log p(x)\,dx$. Given the intractability of this integral, we will rely on a Monte-Carlo estimate of h(X), using N i.i.d. samples $\mathcal{D}_x = \{x_n\}_{n=1}^N$ to obtain

$$\hat{h}_{\mathrm{ORACLE}}(\mathcal{D}_x) := -\frac{1}{N}\sum_{n=1}^{N}\log p(x_n). \qquad (1)$$
Unfortunately, assuming access to the true density p is often unrealistic, and we will thus construct an estimate p̂ that can then be plugged into (1) instead of p. If p̂ is smooth, the resulting plug-in estimator of DE is differentiable (R1).
Assuming access to an additional—ideally independent—set of M i.i.d. samples $\mathcal{E} = \{x'_m\}_{m=1}^M$, we build upon the Parzen-Rosenblatt estimator (Rosenblatt, 1956; Parzen, 1962)

$$\hat{p}(x; w, \mathcal{E}) = \frac{1}{w^d M}\sum_{m=1}^{M}\kappa\!\left(\frac{x - x'_m}{w}\right), \qquad (2)$$
where w > 0 denotes the bandwidth and κ is a kernel density. The resulting entropy estimator when replacing p in (1) by (2) was analyzed in Ahmad & Lin (1976). In Schraudolph (2004), this approach was extended using the kernel estimator
$$\hat{p}_{\mathrm{SCHRAU.}}(x; \mathbf{A}, \mathcal{E}) := \frac{1}{M}\sum_{m=1}^{M} \kappa_{A_m}(x - x'_m), \qquad (3)$$
where A := (A1, . . . , AM ) are (distinct, diagonal) covariance matrices and κA(x) = N (x; 0, A) is a centered Gaussian density with covariance matrix A.
The DOE method of McAllester & Stratos (2020) is an MI estimator that separately estimates a DE and a conditional DE. For DE, a simple Gaussian density estimate $\hat{p}_{\mathrm{DOE}}(x;\boldsymbol{\theta}) = \kappa_A(x - \mu)$ is used, where θ = (A, µ) are the training parameters, the diagonal covariance matrix A and the mean µ.
While both SCHRAU. and DOE yield differentiable plug-in estimators for DE, they each have a major disadvantage. The strategy of Schraudolph (2004) fixes the kernel mean values at E , which implies that the method cannot adapt to a shifting input distribution (R3). On the other hand, DOE allows for rapid adaptation, but its simple structure makes it inadequate for the DE estimation of multi-modal densities. We illustrate these limitations in Section 3.1.
2.2 KNIFE ESTIMATOR
In KNIFE, the kernel density estimate is given by

$$\hat{p}_{\mathrm{KNIFE}}(x;\boldsymbol{\theta}) := \sum_{m=1}^{M} u_m\,\kappa_{A_m}(x - a_m), \qquad (4)$$

where $\boldsymbol{\theta} := (\mathbf{A}, \mathbf{a}, \mathbf{u})$ and the additional parameters $0 \le \mathbf{u} = (u_1, u_2, \ldots, u_M)$ with $\mathbf{1}\cdot\mathbf{u} = 1$ and $\mathbf{a} = (a_1, \ldots, a_M)$ are introduced. Note that $\hat{p}_{\mathrm{KNIFE}}(x;\boldsymbol{\theta})$ is a smooth function of $\boldsymbol{\theta}$, and so is our proposed plug-in estimator

$$\hat{h}_{\mathrm{KNIFE}}(\mathcal{D}_x;\boldsymbol{\theta}) := -\frac{1}{N}\sum_{n=1}^{N}\log \hat{p}_{\mathrm{KNIFE}}(x_n;\boldsymbol{\theta}). \qquad (5)$$
KNIFE combines the ideas of Schraudolph (2004); McAllester & Stratos (2020). It is differentiable and able to adapt to shifting input distributions, while capable of matching multi-modal distributions. Thus, as we will see in synthetic experiments, incorporating um and shifts am in the optimization enables the use of KNIFE in non-stationary settings, where the distribution of X evolves over time.
Learning step: Stemming from the observation that, by the Law of Large Numbers (LLN),

$$\hat{h}_{\mathrm{KNIFE}}(\mathcal{D}_x,\boldsymbol{\theta}) \overset{\mathrm{LLN}}{\approx} -\mathbb{E}\left[\log \hat{p}_{\mathrm{KNIFE}}(X;\boldsymbol{\theta})\right] = h(X) + D_{\mathrm{KL}}\!\left(p \,\|\, \hat{p}_{\mathrm{KNIFE}}(\cdot;\boldsymbol{\theta})\right) \ge h(X), \qquad (6)$$

we propose to learn the parameters θ by minimizing ĥ_KNIFE, where E may be used to initialize a. Although not strictly equivalent due to the Monte-Carlo approximation, minimizing ĥ_KNIFE can be understood as minimizing the Kullback-Leibler (KL) divergence in (6), effectively closing the gap between ĥ_KNIFE and h(X). In fact, ĥ_KNIFE can also be interpreted as the standard maximum likelihood objective, widely used in modern machine learning. It is worth mentioning that the KNIFE estimator is fully differentiable with respect to θ, so the optimization can be tackled by any gradient-based method (e.g., Adam (Kingma & Ba, 2014) or AdamW (Loshchilov & Hutter, 2017)).
2.3 CONVERGENCE ANALYSIS
Note that the classical Parzen-Rosenblatt estimator ĥ(Dx;w), where (2) is plugged into (1), is a special case of KNIFE. Thus, the convergence analysis provided in (Ahmad & Lin, 1976, Theorem 1) also applies and yields sufficient conditions for ĥ_KNIFE(Dx,θ) → h(X). In Appendix C, we extend this result and, assuming that the underlying distribution p is compactly supported on X = [0,1]^d and L-Lipschitz continuous, the following theorem is proved.

Theorem 1. For any δ > 0, there exists a function ε(N,M,w) such that, with probability at least 1 − δ, $|\hat{h}(\mathcal{D}_x;w) - h(X)| \le \varepsilon(N,M,w)$. Additionally, ε(N,M,w) → 0 as M, N → ∞ and w → 0, provided that

$$Nw \to 0 \quad\text{and}\quad \frac{N^2\log N}{w^{2d}M} \to 0, \qquad (7)$$
where M and N denote the number of samples in E and Dx, respectively.
The precise assumptions for Theorem 1 and an explicit formula for ε(N,M,w) are given in Theorem 2 in Appendix C. For instance, Theorem 1 provides a bound on the speed of convergence for the consistency analysis in (Ahmad & Lin, 1976, Theorem 1).
2.4 ESTIMATING CONDITIONAL DIFFERENTIAL ENTROPY AND MUTUAL INFORMATION
Similar to McAllester & Stratos (2020), the proposed DE estimator can be used to estimate other information measures. In particular, we can use KNIFE to construct estimators of conditional DE and MI. When estimating the conditional DE and MI for a pair of random variables (X, Y) ∼ p, we not only use $\mathcal{D}_x = \{x_n\}_{n=1}^N$, but also the corresponding i.i.d. samples $\mathcal{D}_y = \{y_n\}_{n=1}^N$, where (x_n, y_n) are drawn according to p.
Conditional Differential Entropy. We estimate conditional DE h(X|Y ) by considering θ to be a parameterized function Θ(y) of y. Then all relations previously established naturally generalize and
$$\hat{p}_{\mathrm{KNIFE}}(x|y;\Theta) := \hat{p}_{\mathrm{KNIFE}}(x;\Theta(y)), \qquad \hat{h}_{\mathrm{KNIFE}}(\mathcal{D}_x|\mathcal{D}_y;\Theta) := -\frac{1}{N}\sum_{n=1}^{N}\log \hat{p}_{\mathrm{KNIFE}}(x_n|y_n;\Theta). \qquad (8)$$
Naturally, minimization of (6) is now performed over the parameters of Θ. If Y is a continuous random variable, we use an artificial neural network Θ(y), taking y as its input. On the other hand, if Y ∈ Y is a discrete random variable, we have one parameter θ for each y ∈ Y , i.e., Θ = {θy}y∈Y and p̂KNIFE(x|y; Θ) = p̂KNIFE(x; Θ(y)) = p̂KNIFE(x;θy).
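A possible realization of Θ(y) for continuous y is a small network emitting the KNIFE parameters; the sketch below only produces means and weights, and a full version would also emit the covariance factors. All sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class CondKnifeParams(nn.Module):
    """Sketch of Theta(y): maps the conditioning variable y to KNIFE parameters."""
    def __init__(self, y_dim: int, n_modes: int, x_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(y_dim, hidden), nn.Tanh())
        self.to_means = nn.Linear(hidden, n_modes * x_dim)    # a_1, ..., a_M
        self.to_logits = nn.Linear(hidden, n_modes)           # softmax -> u
        self.n_modes, self.x_dim = n_modes, x_dim

    def forward(self, y: torch.Tensor):
        h = self.body(y)
        means = self.to_means(h).view(-1, self.n_modes, self.x_dim)
        log_u = torch.log_softmax(self.to_logits(h), dim=-1)
        return means, log_u

# For a discrete Y, the text instead keeps one parameter set theta_y per class y.
```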
Mutual Information. To estimate the MI between random variables X and Y (either discrete or continuous), recall that MI can be written as I(X;Y ) = h(X) − h(X|Y ). Therefore, we use the marginal and conditional DE estimators (5) and (8) to build a KNIFE-based MI estimator
$$\hat{I}_{\mathrm{KNIFE}}(\mathcal{D}_x,\mathcal{D}_y;\boldsymbol{\theta},\Theta) := \hat{h}_{\mathrm{KNIFE}}(\mathcal{D}_x;\boldsymbol{\theta}) - \hat{h}_{\mathrm{KNIFE}}(\mathcal{D}_x|\mathcal{D}_y;\Theta). \qquad (9)$$
3 EXPERIMENTS USING SYNTHETIC DATA
3.1 DIFFERENTIAL ENTROPY ESTIMATION
In this section we apply KNIFE for DE estimation, comparing it to (3), the method introduced in Schraudolph (2004), subsequently labeled "SCHRAU.". It is worth mentioning that we did not use the Expectation-Maximization algorithm suggested in Schraudolph (2004), but instead opted for the same optimization technique as for KNIFE to facilitate a fair comparison.
3.1.1 GAUSSIAN DISTRIBUTION
As a sanity check, we test KNIFE on multivariate normal data in moderately high dimensions, comparing it to SCHRAU. and DOE, which we trained with the exact same parameters. We performed these experiments with d = 10 and d = 64 dimensional data. KNIFE yielded the lowest bias and variance in both cases, despite DOE being perfectly adapted to matching a multivariate Gaussian distribution. Additional details can be found in Appendix A.1.
In order to use a DE estimation primitive in a machine learning system, it must be able to adapt to a changing input distribution during training (R3). As already pointed out in Section 2.1, this is a severe limitation of SCHRAU., as re-drawing the kernel support E can be either impractical or at the very least requires a complete re-training of the entropy estimator. In (4), by contrast, the kernel support a is trainable and can thus adapt to a change of the input distribution. In order to showcase this ability, we utilize the approach of Cheng et al. (2020a) and successively decrease the entropy, observing how the estimator adapts. We perform this experiment with data of dimension d = 64 and repeatedly multiply the covariance matrix of the training vectors by a factor of a = 1/2. The resulting entropy estimation is depicted in Figure 1. It is apparent that SCHRAU. suffers from a varying bias. The bias increases with decreasing variance, as the kernel support is fixed and cannot adapt as the variance of Dx shrinks. DOE is perfectly adapted to a single Gaussian distribution and performs similarly to KNIFE.
3.1.2 TRIANGLE MIXTURE
KNIFE is able to cope with distributions that have multiple modes. While (3) is also capable of matching multi-modal distributions, DOE is unable to do so, as it approximates any distribution with a multivariate Gaussian. We illustrate this by matching a mixture of randomly drawn triangle distributions. The resulting estimated PDFs as well as the ground truth when estimating the entropy of a 1-dimensional mixture of triangles with 10 components can be observed in Figure 2 (left). With increasing dimension the difficulty of this estimation rises quickly, as in d dimensions the resulting PDF of independent c-component triangle mixtures has c^d modes. To showcase the performance of KNIFE in this challenging task, we ran 10 training runs for DE estimation of 2-component triangle mixtures in 8 dimensions. An example training run is depicted in Figure 2 (right).
3.2 MUTUAL INFORMATION ESTIMATION
Multivariate Gauss We repeat the experiments in (Cheng et al., 2020a), stepping up the MI I(X^d;Y^d) between d i.i.d. copies of joint normal random variables (X,Y) by increasing their correlation coefficient, i.e., (X,Y) are multivariate Gaussian with correlation coefficient ρ_i in the i-th epoch. A training run is depicted in the top of Figure 3. As in (Cheng et al., 2020a), we also repeat the experiment, applying a cubic transformation to Y. The estimation of MI between d i.i.d. copies of X and Y³ can be observed in the middle row of Figure 3. The MI is unaffected by this bijective transformation. In Appendix A.3, the bias and variance are depicted separately.
Sum of Uniformly Distributed Variables In order to test the ability of KNIFE to adapt to distributions substantially different from the Gaussian kernel shape, we apply it in MI estimation of I(X^d;Y^d) with uniformly distributed data. To this end, let X and E be centered, uniformly distributed random variables with E[X²] = E[E²] = 1 and define $Y = \rho_i X + \sqrt{1-\rho_i^2}\,E$ in the i-th epoch. One training run with d = 20 is shown in Figure 3 (bottom). Details about the source distribution as well as details of the experiments can be found in Appendix A.3.
4 EXPERIMENTS ON NATURAL DATA
In this section, we benchmark our proposed KNIFE-based MI estimator on three practical applications, spanning textual and visual data. We reproduce and compare our method to the most recent MI estimators including MINE (Belghazi et al., 2018), NWJ (Nguyen et al., 2010), InfoNCE (van den Oord et al., 2018), CLUB (Cheng et al., 2020a), and DOE (McAllester & Stratos, 2020). We do not explicitly include the SMILE estimator (Song & Ermon, 2019) in our comparison as it has the same gradient as NWJ.
Common notation: In all following applications, we will use Φψ : X → Z to denote an encoder, where X is the raw input space (i.e., texts or images), and Z denotes a lower dimensional continuous feature space. Additionally, we will use Cψ : Z → Y to denote a shallow classifier from the latent space Z to a discrete or continuous target space Y for classification or regression, respectively. We will use ψ to denote the parameters of both models, Φψ and Cψ . CE denotes the cross entropy loss.
4.1 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
IB has recently been applied to fine-tune large-scale pretrained models (Mahabadi et al., 2021) such as BERT (Devlin et al., 2018) and aims at suppressing irrelevant features in order to reduce overfitting.
Problem statement. Given a textual input X ∈ X and a target label Y ∈ Y , the goal is to learn the encoder Φψ and classifier Cψ, such that Φψ(X) retains little information about X , while still producing discriminative features, allowing the prediction of Y . Thus, the loss of interest is:
$$\mathcal{L} = \lambda \cdot \underbrace{I(\Phi_\psi(X);X)}_{\text{compression term}} - \underbrace{I(\Phi_\psi(X);Y)}_{\text{downstream term}}, \qquad (10)$$
where λ controls the trade-off between the downstream and the compression terms.
Setup. Following Mahabadi et al. (2021) (relying on VUB), we work with the VIBERT model, which uses a Gaussian distribution as prior. Φψ is implemented as a stochastic encoder Φψ(X) = Z ∼ N (µψ(X),Σψ(X)). Details on the architecture of µψ and Σψ can be found in Appendix B. The classifier Cψ is composed of dense layers. To minimize L, the second part of the objective (10) is bounded using the variational bound from Barber & Agakov (2003). Since we use a Gaussian prior, h(Z|X) can be expressed in closed form.1 Thus, when using KNIFE, I(X;Z) = h(Z) − h(Z|X) can be estimated by using ĥKNIFE to estimate h(Z). We compare this KNIFE-based MI estimator with aforementioned MI estimators and the variational upper bound (VUB). For completeness, we also compare against a BERT model trained by direct minimization of a CE loss.
We closely follow the protocol of (Mahabadi et al., 2021) and work on the GLUE benchmark (Wang et al., 2018) originally composed of 5 datasets. However, following (Mahabadi et al., 2021), we choose to finetune neither on WNLI (Morgenstern & Ortiz, 2015) nor on CoLA (Warstadt et al., 2019) due to reported flaws in these datasets. The evaluation is carried out on the standard validation splits as the test splits are not available. Following standard practice (Liu et al., 2019; Yang et al., 2019), we report the accuracy and the F1 for MRPC, the accuracy for RTE and the Pearson and Spearman correlation coefficient for STS-B.
Results. Table 1 reports our results on the GLUE benchmark. We observe that KNIFE obtains the best results on all three datasets and the lowest variance on MRPC and STS-B. The use of a Gaussian prior in the stochastic encoder Φψ could explain the observed improvement of KNIFE-based estimation over MI-estimators such as CLUB, InfoNCE, MINE, DOE, or NWJ.
4.2 FAIR TEXTUAL CLASSIFICATION
In fair classification, we would like the model to take its decision without utilizing private information such as gender, age, or race. For this task, MI can be minimized to disentangle the output of the encoder Z and a private label S ∈ S (e.g., gender, age, or race).
¹ $h(Z|X) = \frac{1}{2}\ln|\Sigma_\psi(X)| + \frac{d}{2}\ln(2\pi e)$, where d is the dimension of Z and |·| denotes the determinant.
The resulting objective augments the downstream cross-entropy loss with an MI regularizer:

$$\mathcal{L} = \underbrace{\mathrm{CE}(Y; C_\psi(\Phi_\psi(X)))}_{\text{downstream task}} + \lambda \cdot \underbrace{I(\Phi_\psi(X); S)}_{\text{disentangled}}, \qquad (11)$$
where λ controls the trade-off between minimizing MI and CE loss. In this framework, a classifier is said to be fair or to achieve perfect privacy if no statistical information about S can be extracted from Φψ(X) by an adversarial classifier. Overall, a good model should achieve high accuracy on the main task (i.e., prediction of Y ) while removing information about the protected attribute S. This information is measured by training an offline classifier to recover the protected attribute S from Φψ(X).
Setup. We compute the second term of (11) with competing MI estimators, as well as the model from Elazar & Goldberg (2018), which will be referred to as “Adv”, as it utilizes an adversary to recover the private label from the latent representation Z. For KNIFE-based MI estimation, we use two DE estimators (as S is a binary label), following the approach outlined in Section 2.4. All derivations are detailed in Appendix B. We follow the experimental setting from Elazar & Goldberg (2018); Barrett et al. (2019) and use two datasets from the DIAL corpus (Blodgett et al., 2016) (over 50 million tweets) where the protected attribute S is the race and the main labels are sentiment or mention labels. The mention label indicates whether a tweet is conversational or not. We follow the official split using 160 000 tweets for training and two additional sets composed of 10 000 tweets each for development and testing. In all cases, the labels S and Y are binary and balanced, thus a random guess corresponds to 50% accuracy.
Results. Figure 4 gathers results on the fair classification task. The upper dashed lines represent the (private and main) task accuracies when training a model with only the CE loss (case λ = 0 in (11)). This shows that the learned encoding Φψ(X) contains information about the protected attribute, when training is only performed for the main task. On both the sentiment and mention task, we observe that a KNIFE-based estimator can achieve perfect privacy (see Figures 4b and 4d) with nearly no accuracy loss in the main task (see Figures 4a and 4c). The other MI estimators exhibit different behavior. For sentiment labels, most MI estimators fail to reach perfect privacy (CLUB, NWJ, DOE, and Adv) while others (InfoNCE) achieve perfect privacy while degrading the main task accuracy (10% loss on main accuracy). For mention labels, CLUB can also reach perfect privacy with almost no degradation of the accuracy of the main task. Overall, it is worth noting that KNIFE-based MI estimation enables better control of the degree of disentanglement than the reported baselines.
4.3 UNSUPERVISED DOMAIN ADAPTATION
In unsupervised domain adaptation, the goal is to transfer knowledge from the source domain (S) with a potentially large number of labeled examples to a target domain (T ), where only unlabeled examples are available.
Problem Statement. The learner is given access to labeled images from a source domain (xs, y) ∼ (XS , Y ) ∈ XS × Y and unlabeled images from a target domain xt ∼ XT ∈ XT . The goal is to
learn a classification model $\{\Phi_\psi, C_\psi\}$ that generalizes well to the target domain. Training models on the supervised source data only results in domain-specific latent representations $\Phi_\psi(X)$, leading to poor generalization (when $X$ is chosen randomly from $\{X_S, X_T\}$). In order to make the latent representations as domain-agnostic as possible, we follow the information-theoretic method proposed by Gholami et al. (2020), and used in Cheng et al. (2020a). The idea is to learn an additional binary model $\{\Phi^d_\nu, C^d_\nu\}$, whose goal is to guess the domain $D \in \{0, 1\}$ of $X$. The latent representation learned by $\Phi^d_\nu$ will therefore contain all the domain-specific information that we would like the main encoder $\Phi_\psi$ to discard. In other words, we would like $\Phi_\psi(X)$ and $\Phi^d_\nu(X)$ to be completely disentangled, which naturally corresponds to the minimization of $I(\Phi_\psi(X); \Phi^d_\nu(X))$. Concretely, the domain classifier is trained to minimize the CE between domain labels $D$ and its own predictions, whereas the main classifier is trained to properly classify source samples while minimizing the MI between $\Phi_\psi(X)$ and $\Phi^d_\nu(X)$. Using $f^d_\nu := C^d_\nu \circ \Phi^d_\nu$ and $f_\psi := C_\psi \circ \Phi_\psi$, the objectives are
$$\min_\nu \mathrm{CE}(D; f^d_\nu(X)) \qquad \text{and} \qquad \min_\psi \mathrm{CE}(Y; f_\psi(X_S)) + \lambda \cdot I(\Phi_\psi(X); \Phi^d_\nu(X)). \tag{12}$$
Setup. The different MI estimators are compared based on their ability to guide training by estimating I(Φψ(X); Φdν(X)) in (12). We follow the setup of Cheng et al. (2020a) as closely as possible, and consider a total of 6 source/target scenarios formed with MNIST (LeCun & Cortes, 2010), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), and STL-10 (Coates et al., 2011) datasets. We reproduce all methods and allocate the same budget for hyper-parameter tuning to every method. The exhaustive list of hyper-parameters can be found in Appendix B.
Results. Results are presented in Table 2. The KNIFE-based estimator is able to outperform MI estimators in this challenging scenario where both Φψ(X) and Φdν(X) are continuous.
5 CONCLUDING REMARKS
We introduced KNIFE, a fully learnable, differentiable kernel-based estimator of differential entropy, designed for deep learning applications. We constructed a mutual information estimator based on KNIFE and showcased several applications. KNIFE is a general purpose estimator and does not require any special properties of the learning problem. It can thus be incorporated as part of any training objective, where differential entropy or mutual information estimation is desired. In the case of mutual information, one random variable may even be discrete.
Despite the fundamental challenges in the problem of differential entropy estimation, beyond limitations arising from the use of a finite number of samples, KNIFE has demonstrated promising empirical results in various representation learning tasks.
Future work will focus on improving the confidence bounds given in Theorem 1. In particular, tailoring them towards KNIFE using tools from (Birge & Massart, 1995; Singh & Poczos, 2014). Another potential extension is direct estimation of the gradient of entropy, when p̂KNIFE(x;θ) has been learned (Mohamed et al., 2020; Song et al., 2020). This could be applied after the learning phase of KNIFE and is left for future work.
APPENDIX
A EXPERIMENTAL DETAILS OF EXPERIMENTS WITH SYNTHETIC DATA
Implementation of KNIFE in PyTorch (Paszke et al., 2019) is rather straightforward. The constraint on the weights u can be satisfied by applying a softmax transformation. The covariance matrices were parameterized by the lower-triangular factor in the Cholesky decomposition of the precision matrices, guaranteeing the definiteness constraint to be satisfied.
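To make this parameterization concrete, here is a minimal PyTorch sketch of the density model (4) under the conventions just described (softmax weights, Cholesky factors of the precision matrices); the class name, initialization scale, and default number of kernels are illustrative choices, not the paper's exact code.

```python
import math
import torch
import torch.nn as nn

class KNIFE(nn.Module):
    def __init__(self, d, M=128):
        super().__init__()
        self.d = d
        self.means = nn.Parameter(0.1 * torch.randn(M, d))  # kernel centers a_m
        self.logits = nn.Parameter(torch.zeros(M))          # softmax gives the weights u_m
        self.tri = nn.Parameter(torch.zeros(M, d, d))       # strictly lower part of L_m
        self.log_diag = nn.Parameter(torch.zeros(M, d))     # log of diag(L_m), positive via exp

    def log_prob(self, x):                                  # x: (N, d)
        # Precision of the m-th kernel is P_m = L_m L_m^T (Cholesky parameterization).
        L = torch.tril(self.tri, -1) + torch.diag_embed(self.log_diag.exp())
        diff = x.unsqueeze(1) - self.means.unsqueeze(0)     # (N, M, d)
        y = torch.einsum('mji,nmj->nmi', L, diff)           # L_m^T (x - a_m)
        quad = y.pow(2).sum(-1)                             # (x - a_m)^T P_m (x - a_m)
        log_det = 2.0 * self.log_diag.sum(-1)               # log |P_m|
        log_w = torch.log_softmax(self.logits, dim=0)
        log_comp = log_w + 0.5 * (log_det - quad) - 0.5 * self.d * math.log(2 * math.pi)
        return torch.logsumexp(log_comp, dim=1)             # log p_KNIFE(x; theta), cf. (4)

    def entropy_loss(self, x):
        return -self.log_prob(x).mean()                     # plug-in estimate (5), minimized during training
```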
A.1 DIFFERENTIAL ENTROPY ESTIMATION OF GAUSSIAN DATA
In Section 3.1.1, the estimation of the entropy $h(X) = \frac{d}{2}\log 2\pi e$ for $X \sim \mathcal{N}(0, I_d)$ was performed with the hyperparameters given in Table 3. The mean error and its empirical standard deviation are reported in Table 5 over 20 runs, where an independently drawn evaluation set with the same size as the training set is used. At $d = 10$ we have the entropy $h = \frac{d}{2}\log 2\pi e \approx 14.19$, while for the higher dimension, $d = 64$, we find $h \approx 90.81$.
In the experiment depicted in Figure 1, entropy is decreased after every epoch by letting $X_i \sim \mathcal{N}(0, a^i I_d)$, where $i = 0, \dots, 4$ is the epoch index. That is, $X_i = \sqrt{a^i}\, G^d$, where $G$ is a standard normal random variable, resulting in a decrease of the DE by $\Delta = -\frac{d}{2}\log a \approx 22.18$ for $a = \frac{1}{2}$ with every epoch. We start at $h(X_0) = \frac{d}{2}\log 2\pi e \approx 90.81$ and successively decrease until $h(X_4) = h(X_0) - 4\Delta \approx 2.1$. Additional parameters can be found in Table 4.
Computational Resources. Training was performed on an NVidia V100 GPU. Taken together, training for the first experiments of entropy estimation in dimensions d = 10, 64, as well as the experiment depicted in Figure 1 used GPU time of less than 5 minutes.
A.2 DIFFERENTIAL ENTROPY ESTIMATION OF TRIANGLE MIXTURES
In Section 3.1.2, we perform an estimation of the entropy of $c$-component triangle mixture distributions. The PDF of such a $c$-component triangle mixture is given by

$$p(x) = \sum_{i=1}^{c} w_i \Lambda_{s_i}\!\left(x - \frac{i-1}{2}\right), \tag{13}$$

where $\Lambda_s(x) := \frac{1}{s}\max\{0,\, 2 - 4|x|/s\}$ is a centered triangle PDF with width $s > 0$. The scales $s = (s_1, \dots, s_c)$ and weights $w = (w_1, \dots, w_c)$ satisfy $0 < s_i, w_i < 1$ and $\sum_{i=1}^{c} w_i = 1$. Before the experiment, we choose $w$ uniformly at random from the $c$-probability simplex and the scales are chosen uniformly at random in $[0.1, 1.0]$. An example for $c = 10$ is the true PDF depicted in Figure 2 (left). For $d > 1$, we perform the estimation on $d$ i.i.d. copies. Note that the triangle mixture with $c$ components in $d$-dimensional space has $c^d$ modes, i.e., the support can be partitioned into $c^d$ disjoint components.
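For concreteness, the following is a minimal NumPy sketch of the mixture (13) in $d = 1$, together with a Monte-Carlo reference value for its entropy; the seed and sample size are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 10
w = rng.dirichlet(np.ones(c))          # random weights on the c-probability simplex
s = rng.uniform(0.1, 1.0, size=c)      # random widths
centers = np.arange(c) / 2.0           # (i - 1) / 2 for i = 1, ..., c

def pdf(x):
    x = np.asarray(x, dtype=float)[..., None]               # broadcast over components
    lam = np.maximum(0.0, 2.0 - 4.0 * np.abs(x - centers) / s) / s
    return (w * lam).sum(-1)

# Monte-Carlo reference entropy: pick a component, then sample a centered
# triangle of width s (sum of two uniforms), and average -log p(X).
n = 100_000
comp = rng.choice(c, size=n, p=w)
tri = (rng.uniform(size=n) + rng.uniform(size=n) - 1.0) * s[comp] / 2.0
h_mc = -np.log(pdf(centers[comp] + tri)).mean()
```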
The parameters of the experiment yielding Figure 2 (left) are given in Table 6, while the details of the experiment depicted in Figure 2 (right) can be found in Table 7. In the latter experiment, over ten runs, entropy was estimated to an accuracy of 1.6563± 0.8528 by KNIFE, accurate to 2.4445± 0.5439 using (3) and with an accuracy of 7.1070± 2.7984 by DOE. This is the mean absolute error and its empirical standard deviation over all 10 runs, where the evaluation set was drawn independently from the training set and has the same size as the training set.
Computational Resources. Training was performed on an NVidia V100 GPU. Training in $d = 1$ dimension, which produced Figure 2 (left), can be performed in seconds, while all training required for producing Figure 2 (right) used approximately 1.5 hours of GPU time.
A.3 MUTUAL INFORMATION ESTIMATION
In Section 3.2, we estimate $I(X^d; Y^d)$ and $I(X^d; (Y^3)^d)$, where $(X, Y)$ are multivariate correlated Gaussian distributions with correlation coefficient $\rho_i$ in the $i$-th epoch. Subsequently, we estimate $I(X^d; Y^d)$ where $X, E \sim \mathcal{U}[-\sqrt{3}, \sqrt{3}]$ are independent and $Y$ is given by $Y = \rho_i X + \sqrt{1 - \rho_i^2}\, E$. In both cases, $\rho_i$ is chosen such that $I(X^d; Y^d) = 2i$ in the $i$-th epoch.
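For the Gaussian case, the required correlation follows in closed form from $I(X^d; Y^d) = -\frac{d}{2}\ln(1 - \rho^2)$ (in nats), so a target of $2i$ gives $\rho_i = \sqrt{1 - e^{-4i/d}}$. The short check below is our own worked derivation for this case only; for the uniform construction the target would have to be matched numerically.

```python
import math

def rho_for_mi(target_nats, d):
    # Solve -(d / 2) * ln(1 - rho^2) = target, valid for the Gaussian case.
    return math.sqrt(1.0 - math.exp(-2.0 * target_nats / d))

d = 20
print([round(rho_for_mi(2 * i, d), 4) for i in range(1, 5)])  # epochs i = 1..4
```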
All neural networks are randomly initialized. The bias, variance, and MSE during training as a function of the MI, can be observed in Figure 5.
The estimation is performed in 10 runs, randomly choosing the training meta-parameters as proposed by McAllester & Stratos (2020). In Figure 3 (bottom), we present the best run for each method, selected by distance from the true MI at the end of training. The bias, variance, and MSE during training, as a function of the MI, can be observed in Figure 6. Details about the source distribution as well as details of the experiments can be found in Table 8. During experimentation it turned out to be
beneficial to train the parameters Θ and θ in (9) separately and substantially increase the learning rate for the training of θ. Thus, we increase the learning rate for the training of θ by a factor of $10^3$.
Model Architecture for Θ. We utilize the feed-forward architecture, also used in McAllester & Stratos (2020). It is a simple architecture with two linear layers, one hidden layer using tanh activation, immediately followed by an output layer. The number of neurons in the hidden layer is a meta-parameter selected randomly from {64, 128, 256} for each training run. Three models with this architecture are used for the three parameters (A,a,u), as described by (4), where only the output dimension is changed to fit the parameter dimension.
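A minimal PyTorch sketch of this parameterization of $\Theta(y)$ is given below: three identical two-layer networks with a tanh hidden activation, one per parameter group of (4). The sizes are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch.nn as nn

def make_theta_net(d_in, d_out, hidden=128):
    # Two linear layers with a tanh hidden activation, as described above.
    return nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh(), nn.Linear(hidden, d_out))

d_y, d_x, M = 20, 20, 128                    # illustrative sizes
net_A = make_theta_net(d_y, M * d_x * d_x)   # covariance (Cholesky) parameters
net_a = make_theta_net(d_y, M * d_x)         # kernel means
net_u = make_theta_net(d_y, M)               # mixture logits (softmax gives u)
```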
Computational Resources. Training was performed, using about 6 hours of GPU time on an NVidia V100 GPU to carry out the experiment depicted in Figure 3 (bottom).
B EXPERIMENTAL DETAILS OF EXPERIMENTS ON NATURAL DATA
B.1 ON THE PARAMETER UPDATE
In Section 4, we rely on two different types of models: pretrained (e.g., fine tuning with VIBERT) and randomly initialized (e.g., in fair classification and domain adaptation). When working with randomly initialized networks the parameters are updated. However, it is worth noting that in the literature the pretrained model parameters (i.e. ψ) are not always updated (see Ravfogel et al. (2020)). In our experiments: (i) We always update the parameters (even for pretrained models), and (ii) we did not change the way the parameters were updated in concurrent works (to ensure fair comparison). Specifically,
• for language model finetuning (Appendix B.2), we followed Mahabadi et al. (2021) and did a joint update;
• for the fair classification task (Appendix B.3), we followed common practice and used the algorithm described in Algorithm 1, which relies on an alternated update;
• for the domain adaptation task (Appendix B.4), we followed common practice and used a joint method.
B.2 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
For this experiment we follow the experimental setting introduced in Mahabadi et al. (2021) and work with the GLUE data2.
Model Architecture. We report in Table 9, the multilayer perceptron (MLP) used to compute the compressed sentence representations produced by BERT. Variance and Mean MLP networks are composed of fully connected layers.
2see https://gluebenchmark.com/faq
Algorithm 1 Disentanglement using a MI-based regularizer
1: INPUT Labelled training set D = {(x_j, s_j, y_j) ∀j ∈ [n+1, N]}; independent set of samples E; θ parameters of KNIFE; ψ parameters of the network.
2: INITIALIZE parameters θ, ψ
3: OPTIMIZATION
4: while (θ, ψ) not converged do
5:     for i ∈ [1, Unroll] do    ▷ Learning step for KNIFE
6:         Sample a batch B from E
7:         Update θ using (9).
8:     end for
9:     Sample a batch B′ from D
10:    Update ψ with B′ using (11).
11: end while
12: OUTPUT Encoder and classifier weights ψ
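A minimal PyTorch-style sketch of this alternated loop follows; `knife_loss` and `task_loss` are hypothetical callables standing in for the objectives (9) and (11), and all other names are illustrative, not taken from the paper's code.

```python
import itertools
import torch

def alternated_training(knife, model, E_loader, D_loader,
                        knife_loss, task_loss, steps=1000, unroll=5):
    """Sketch of Algorithm 1: inner updates of theta (KNIFE), then one update of psi."""
    opt_theta = torch.optim.AdamW(knife.parameters())
    opt_psi = torch.optim.AdamW(model.parameters())
    for _ in range(steps):
        for batch in itertools.islice(iter(E_loader), unroll):  # learning step for KNIFE, cf. (9)
            opt_theta.zero_grad()
            knife_loss(knife, model, batch).backward()
            opt_theta.step()
        batch = next(iter(D_loader))                            # update psi with the loss (11)
        opt_psi.zero_grad()
        task_loss(knife, model, batch).backward()
        opt_psi.step()
```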
Table 10: Experimental details on Information Bottleneck.

Parameter      Value
Learning Rate  See Appendix B.2
Optimizer      AdamW
Warmup Steps   0.0
Dropout        0.0
Batch Size     32
Model Training. For model training, all models are trained for 6 epochs and we use early stopping (the best model is selected on validation set error). For IB, $\lambda$ is selected in $\{10^{-4}, 10^{-5}, 10^{-6}\}$ and $K$ is selected in $\{144, 192, 288, 384\}$. We follow (Alemi et al., 2016), where the posterior is averaged over 5 samples and a linear annealing schedule is used for $\lambda$. Additional hyper-parameters are reported in Table 10.
Dataset Statistics. Table 11 reports the statistics of the dataset used in our finetuning experiment.
Computational Resources. For all these experiments we rely on NVidia-P100 with 16GB of RAM. To complete the full grid-search on 10 seeds and on the three datasets, approximately 1.5k hours are required.
B.3 FAIR TEXTUAL CLASSIFICATION
In this section, we gather the experimental details for the textual fair classification task.
B.3.1 DETAILS OF THE KNIFE-BASED ESTIMATOR
In this experiment, we estimate the MI between a continuous random variable, namely $Z = \Phi_\psi(X)$, and a discrete variable, denoted by $S \in \mathcal{S} = \{1, 2, \dots, |\mathcal{S}|\}$. We follow the strategy outlined in Section 2.4 for estimating the conditional DE $h(Z|S)$. However, we will reuse the estimate of the conditional PDF $\hat p(z|s; \Theta)$ to compute an estimate of the DE as

$$h(Z) \approx -\frac{1}{N}\sum_{n=1}^{N} \log\Big(\sum_{s \in \mathcal{S}} \hat p_{\mathrm{KNIFE}}(z_n|s; \Theta)\, \hat p(s)\Big), \tag{14}$$

where $\hat p(s) = \frac{1}{N}|\{n : s_n = s\}|$ is used to indicate the empirical distribution of $S$ in the training set $\mathcal{D}_s$.3 In our experiments, with $|\mathcal{S}| = 2$, we found that estimating the DE $h(Z)$ based on the KNIFE estimator learnt for $h(Z|S)$ increases the stability of training. We adopted the same strategy for DOE.
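A minimal PyTorch sketch of the mixing step (14) is given below; `cond_models` is a hypothetical dict mapping each value of $S$ to a conditional density model exposing `log_prob`, and `p_s` holds the empirical class frequencies.

```python
import math
import torch

def marginal_entropy(cond_models, z, p_s):
    """Estimate of h(Z) as in (14), reusing the per-class conditional models."""
    log_mix = torch.stack(
        [model.log_prob(z) + math.log(p_s[s]) for s, model in cond_models.items()])
    return -torch.logsumexp(log_mix, dim=0).mean()
```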
B.3.2 EXPERIMENTAL DETAILS
Model Architecture. For the encoder, we use a bidirectional GRU with two layers, with hidden and input dimension set to 128. We use LeakyReLU as the activation function. The classification head is composed of fully connected layers of input dimension 256. We use a learning rate of 0.0001 for AdamW. The dropout rate is set to 0.2. The number of warmup steps is set to 1000.
3As we work with balanced batches, we will have $\hat p(s) = \frac{1}{|\mathcal{S}|}$.
Computational Resources. For all these experiments, we rely on NVIDIA-P100 with 16GB of RAM. Each model is trained for 30k steps. The model with the lowest MI is selected. The training of a single network takes around 3 hours.
B.4 UNSUPERVISED DOMAIN ADAPTATION
We follow the experimental setup given in Cheng et al. (2020a) as closely as possible, i.e., we pick hyperparameters given in the paper, or if not provided, those set in the code:4
Model Training. We use Adam optimizer for all modules with a learning rate of 0.001. Batch size is set to 128. We set the weighting parameter λ = 0.1. The original code of Cheng et al. (2020a) uses 15 000 training iterations, but we found most methods had not properly converged at this stage, and hence use 25 000 iterations instead. Similar to other experiments, we set the kernel size M = 128.
Model Architecture. Table 12 summarizes the architectures used for the different modules. For the MI network of each method, the best configuration, based on the validation set of the first task MNIST→MNIST-M, is chosen among 4 configurations: with or without LayerNorm and with ReLU or tanh activation.
Computational Resources. For these experiments, we used a cluster of NVIDIA-V100 with 16GB of RAM. Each training (i.e., 25k iterations) on a single task requires on average 2 hours. Given that we have 6 tasks, and repeat the training for 3 different seeds, on average 36 hours computation time is required for each method.
C BOUNDING THE ERROR
In the following, fix $L > 0$ and let $\mathcal{P}_L$ be the set of $L$-Lipschitz PDFs supported5 on $\mathcal{X} := [0, 1]^d$, i.e., $\int_{\mathcal{X}} p(x)\, dx = 1$, and

$$\forall x, y \in \mathbb{R}^d : |p(x) - p(y)| \le L\|x - y\| \tag{15}$$

for $p \in \mathcal{P}_L$, where6 $\|x\| := \sum_k |x_k|$.
Assume $p \in \mathcal{P}_L$ and let $\kappa$ be a PDF supported on $\mathcal{X}$. In order to show that estimation of $h(X)$ is achievable, we use a standard Parzen-Rosenblatt estimator $\hat p(x; w) := \frac{1}{Mw^d}\sum_{m=1}^{M} \kappa\big(\frac{x - X'_m}{w}\big)$, as in (2). The entropy estimate is then defined by the empirical average

$$\hat h(\mathcal{D}_x; w) := -\frac{1}{N}\sum_{n=1}^{N} \log \hat p(X_n; w). \tag{16}$$
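As a reference point, here is a minimal NumPy sketch of the fixed-bandwidth plug-in estimate (16) on two independent sample sets; we use a Gaussian kernel for simplicity, although the analysis above assumes a compactly supported $\kappa$, and all names are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def parzen_entropy(x_eval, x_support, w):
    """Plug-in entropy estimate (16) with a Gaussian kernel kappa."""
    d = x_eval.shape[1]
    m = x_support.shape[0]
    diff = (x_eval[:, None, :] - x_support[None, :, :]) / w           # (N, M, d)
    log_kappa = -0.5 * (diff ** 2).sum(-1) - 0.5 * d * np.log(2.0 * np.pi)
    log_p = logsumexp(log_kappa, axis=1) - np.log(m) - d * np.log(w)  # log p_hat(X_n; w)
    return -log_p.mean()
```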
Further, define the following quantities, which are assumed to be finite:

$$p_{\max} := \max\{p(x) : x \in \mathcal{X}\}, \tag{17}$$
$$C_1 := \int p(x) \log^2 p(x)\, dx, \tag{18}$$
$$C_2 := L \int \|u\|\, \kappa(u)\, du, \tag{19}$$
$$K_{\max} := \max\{\kappa(x) : x \in \mathcal{X}\}. \tag{20}$$

Note that it is easily seen that $p_{\max} \le \frac{L}{2}$ and $C_1 \le \max\{p_{\max}\log^2 p_{\max},\, 4e^{-2}\}$ by our assumptions. The requirement $C_2, K_{\max} < \infty$ represents a mild condition on the kernel function $\kappa$. We can now show the following.
4https://github.com/Linear95/CLUB/tree/master/MI_DA.
5Any known compact support suffices. An affine transformation then yields $\mathcal{X} = [0, 1]^d$, while possibly resulting in a different Lipschitz constant.
6The $\ell_1$ norm is chosen to facilitate subsequent computations. By the equivalence of norms on $\mathbb{R}^d$, any norm suffices.
Theorem 2. With probability greater than $1 - \delta$ we have

$$|h(X) - \hat h(\mathcal{D}_x; w)| \le -\log\left(1 - \frac{3NK_{\max}}{w^d\delta}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} - \frac{3NC_2 w}{\delta}\right) + \sqrt{\frac{3C_1}{N\delta}}, \tag{21}$$

if the expression in the logarithm is positive.

In particular, the estimation error approaches zero as $N \to \infty$ if $w = w(N) \to 0$ and $M = M(N) \to \infty$ are chosen such that

$$Nw \to 0, \tag{22}$$
$$\frac{N^2 \log N}{w^{2d} M} \to 0. \tag{23}$$
We prove Theorem 2 in several Lemmas.
Lemma 3. Fix $\delta > 0$ and $x_0 \in \mathcal{X}$. Then, with probability greater than $1 - \delta$,

$$|p(x_0) - \hat p(x_0)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}} + C_2 w. \tag{24}$$
Proof. First, we can show that

$$|\mathbb{E}[\hat p(x_0)] - p(x_0)| = \left|\frac{1}{Mw^d}\sum_{m=1}^{M}\int \kappa\Big(\frac{x_0 - x}{w}\Big)\, p(x)\, dx - p(x_0)\right| \tag{25}$$
$$= \left|\frac{1}{w^d}\int \kappa\Big(\frac{x_0 - x}{w}\Big)\, p(x)\, dx - p(x_0)\right| \tag{26}$$
$$= \left|\int \kappa(u)\, p(x_0 - wu)\, du - p(x_0)\right| \tag{27}$$
$$= \left|\int \kappa(u)\, [p(x_0 - wu) - p(x_0)]\, du\right| \tag{28}$$
$$\le \int \kappa(u)\, |p(x_0 - wu) - p(x_0)|\, du \tag{29}$$
$$\le \int \kappa(u)\, Lw\|u\|\, du \tag{30}$$
$$= wC_2. \tag{31}$$

Next, note that

$$|\mathbb{E}[\hat p(x_0)] - \hat p(x_0)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}} \tag{32}$$

holds with probability greater than $1 - \delta$, as the requirements of McDiarmid's inequality (Paninski, 2003, Sec. 3) are satisfied with $c_j = \frac{K_{\max}}{Mw^d}$, and thus $\mathbb{P}\{|\mathbb{E}[\hat p(x_0)] - \hat p(x_0)| \ge \varepsilon\} \le \delta$ with

$$\varepsilon = \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{2}{\delta}}{2M}}. \tag{33}$$

Combining (31) and (32) gives (24).
Lemma 4. For any continuous random variable $X$ supported on $\mathcal{X}$ and $a \ge 0$, we have

$$\mathbb{P}\{p(X) \le a\} \le a. \tag{34}$$

Proof. We apply Markov's inequality to the random variable $Y = \frac{1}{p(X)}$, noting that $\mathbb{E}[Y] = \int_{\mathcal{X}} dx = \mathrm{vol}(\mathcal{X}) = 1$, and observe that

$$\mathbb{P}\{p(X) \le a\} = \mathbb{P}\{Y \ge a^{-1}\} \le \mathbb{E}[Y]\, a = \mathrm{vol}(\mathcal{X})\, a = a. \tag{35}$$
Lemma 5. If $x > 0$, $y \ge a > 0$, $0 < a < 1$, and $|x - y| \le \delta < a$, then

$$|\log x - \log y| \le \log\frac{a}{a - \delta} = -\log\Big(1 - \frac{\delta}{a}\Big). \tag{36}$$
Proof. Case $x \ge y$. We can write $y = a + b$ and $x = y + c = a + b + c$ for $b \ge 0$ and $0 \le c \le \delta < a$. Then

$$\left|\log\frac{x}{y}\right| = \log\Big(1 + \frac{c}{a + b}\Big) \tag{37}$$
$$\le \log\Big(1 + \frac{c}{a}\Big) \le \log\Big(1 + \frac{\delta}{a}\Big). \tag{38}$$

Furthermore,

$$\log\Big(\frac{a}{a - \delta}\Big) - \log\Big(1 + \frac{\delta}{a}\Big) = \log\frac{a^2}{(a + \delta)(a - \delta)} \tag{39}$$
$$= \log\frac{a^2}{a^2 - \delta^2} \tag{40}$$
$$\ge \log\frac{a^2}{a^2} = 0, \tag{41}$$

so $\log(1 + \delta/a) \le \log\frac{a}{a - \delta}$, proving this case.

Case $x < y$. Here, we can write $y = a + b$ and $x = y - c = a + b - c$ for $b \ge 0$ and $0 \le c \le \delta < a$. Then

$$\left|\log\frac{x}{y}\right| = \log\frac{y}{x} \tag{42}$$
$$= \log\Big(\frac{a + b}{a + b - c}\Big) \tag{43}$$
$$\le \log\Big(\frac{a}{a - c}\Big) \tag{44}$$
$$\le \log\Big(\frac{a}{a - \delta}\Big) = -\log\Big(1 - \frac{\delta}{a}\Big). \tag{45}$$
Proof of Theorem 2. We apply Lemma 3 $N$ times and use the union bound to show that with probability greater than $1 - \frac{\delta}{3}$ we have, for every $n \in [N]$,

$$|p(X_n) - \hat p(X_n)| \le \frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} + C_2 w. \tag{46}$$

Similarly, by Lemma 4 and the union bound, we have with probability greater than $1 - \frac{\delta}{3}$ that

$$p(X_n) \ge \frac{\delta}{3N} \tag{47}$$

for all $n \in [N]$.

Again by the union bound, we have that with probability greater than $1 - \frac{2\delta}{3}$ both (46) and (47) hold for all $n \in [N]$, and thus, by Lemma 5, we obtain

$$\left|\hat h(\mathcal{D}_x; w) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right| = \left|\frac{1}{N}\sum_{n=1}^{N}\log\frac{p(X_n)}{\hat p(X_n)}\right| \tag{48}$$
$$\le -\log\left(1 - \frac{\frac{K_{\max}}{w^d}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} + C_2 w}{\frac{\delta}{3N}}\right) \tag{49}$$
$$= -\log\left(1 - \frac{3NK_{\max}}{w^d\delta}\sqrt{\frac{\log\frac{6N}{\delta}}{2M}} - \frac{3NC_2 w}{\delta}\right), \tag{50}$$

provided the argument in the logarithm is positive. Finally, we have the upper bound on the variance

$$\mathbb{E}\left[\left(h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right)^2\right] = \frac{1}{N^2}\sum_{n=1}^{N}\mathbb{E}\big[(h(X) + \log p(X))^2\big] \tag{51}$$
$$= \frac{1}{N}\left(\mathbb{E}[\log^2 p(X)] - h(X)^2\right) \tag{52}$$
$$\le \frac{C_1}{N} \tag{53}$$

and apply Chebyshev's inequality, showing that with probability greater than $1 - \frac{\delta}{3}$,

$$\left|h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\right| \le \sqrt{\frac{3C_1}{N\delta}}. \tag{54}$$

The union bound and the triangle inequality applied to (50) and (54) yield the desired result.
D LIBRARIES USED
For our experiments, we built upon code from the following sources.
• VIBERT (Mahabadi et al., 2021) at github.com/rabeehk/vibert.
• TRANSFORMERS (Wolf et al., 2019) at github.com/huggingface/transformers.
• DOE (McAllester & Stratos, 2020) at github.com/karlstratos/doe.
• SMILE (Song & Ermon, 2019) at github.com/ermongroup/smile-mi-estimator.
• InfoNCE, MINE, NWJ, CLUB (Cheng et al., 2020a) at github.com/Linear95/CLUB.

1. What is the focus of the paper regarding representation learning?
2. What are the strengths and weaknesses of the proposed method compared to previous works?
3. How does the reviewer assess the consistency and usefulness of the results in downstream tasks?
4. Are there any concerns or suggestions for future comparisons with related works in the field?
Summary Of The Paper
This work proposes an entropy / mutual information estimator that is suitable for representation learning, by extending the Parzen-Rosenblatt estimator with learnable centroids, bandwidth matrices and coefficients. The authors prove consistency, and demonstrate that on various downstream tasks where MI-based regularization is needed, the proposed method outperforms previous work on entropy / MI estimation.
Review
Original Review
The methodological novelty of this work is the learnable weighting coefficients and centroids, i.e., modifying the Parzen estimator $\frac{1}{M}\sum_i k_{A_i}(\cdot - x_i)$ as $\sum_i u_i\, k_{A_i}(\cdot - a_i)$, where the parameters $(u_i, a_i)$ are chosen to minimize the KL divergence w.r.t. the data distribution, and $A_i$ is the learnable bandwidth. It is argued that, in ML tasks where the data $x_i$ are the learned representations and change during training, the modification allows the estimator to be both flexible and efficient.
The argument is reasonable in problems where a learnable bandwidth is necessary, yet more flexible estimation methods (e.g., neural energy-based models) are too expensive. The strength of the method mostly lies in its practicality (in representation learning tasks): it does not provide proper lower or upper bounds for mutual information, which is the central quantity of interest in the downstream tasks. The novelty is limited, but this is fine if there is consistent improvement in empirical performance.
I don't have any major issues with this work. There are nonetheless a few questions that need clarification:
Consistency of the results with previous work:
Table 1 doesn't appear consistent with any tables in Mahabadi et al (2021), e.g. in the MRPC experiments the accuracy is consistently higher than F1 in Mahabadi et al (2021), but not here.
Table 2 doesn't appear consistent with Table 2 in Cheng et al (2020a), the baseline performance here seems consistently worse, and the difference is significant with the only exception of M->U/U->M.
In both cases, this paper claims to follow the setup in the corresponding previous work closely, so an explanation on the difference will be appreciated.
Usefulness of MI regularization in downstream tasks: while I can imagine MI-like/-inspired regularization can be useful, I'm less certain if accurate estimation of MI always translate to improvements. For example, from a quick scan it appears Mahabadi et al (2021) used a very heuristic estimation of MI, by fitting multivariate Gaussians on the joint and conditional distributions, yet demonstrated a similar level of improvement in GLUE as here. While the difference in experiment setup prevents any definitive conclusion, it would be convincing if the authors could implement the heuristic in Mahabadi et al (2021) in the GLUE experiment here and report the performance.
For entropy/MI regularization for downstream tasks, the gradient (score) estimators in the variational inference literature may serve the same purpose. See, for example, Section 9.6 in Mohamed et al (2020). While evaluation may be difficult to fit in the rebuttal timeline, it would greatly strengthen the paper if the authors could eventually compare with some works in this line, e.g., Song et al (2019) and Zhou et al (2020), as they usually claim applicability in problems with a similar complexity.
References:
Mohamed et al (2020), Monte Carlo Gradient Estimation in Machine Learning, in JMLR.
Song et al (2019), Sliced Score Matching: A Scalable Approach to Density and Score Estimation, in UAI.
Zhou et al (2020), Nonparametric Score Estimators, in ICML.
Post-rebuttal Update
The authors' response addressed my questions about the experiments and the updated proof appears correct. The contribution of this work is largely empirical -- the listed requirements and the new estimator do not appear very novel to me, although it is understandable that the characteristics of the downstream tasks may prevent the development of more flexible methods. The experiments clearly demonstrate improvements over past MI-based methods. However, as I'm not familiar with the evaluation tasks, I cannot evaluate their significance in the broader context; that task will have to be left to the other reviewers. Therefore, I'm changing my score to 6, in light of the resolved questions about the theory and experiments. |
ICLR | Title
KNIFE: Kernelized-Neural Differential Entropy Estimation
Abstract
Estimation of (differential) entropy and the related mutual information has been pursued with significant efforts by the machine learning community. To address shortcomings in previously proposed estimators for differential entropy, here we introduce KNIFE, a fully parameterized, differentiable kernel-based estimator of differential entropy. The flexibility of our approach also allows us to construct KNIFE-based estimators for conditional (on either discrete or continuous variables) differential entropy, as well as mutual information. We empirically validate our method on high-dimensional synthetic data and further apply it to guide the training of neural networks for real-world tasks. Our experiments on a large variety of tasks, including visual domain adaptation, textual fair classification, and textual fine-tuning demonstrate the effectiveness of KNIFE-based estimation.
1 INTRODUCTION
Learning tasks requires information (Principe et al., 2006) in the form of training data. Thus, information measures (Shannon, 1948) (e.g. entropy, conditional entropy and mutual information) have been a source of inspiration for the design of learning objectives in modern machine learning (ML) models (Linsker, 1989; Torkkola, 2006). Over the years, a plethora of estimators have been introduced to estimate the value of the aforementioned measures of information and they have been applied to many different problems, including information and coding theory, limiting distributions, model selection, design of experiment and optimal prior distribution, data disclosure, and relative importance of predictors (Ebrahimi et al., 2010). In these applications, traditional research focused on both developing new estimators and obtaining provable guarantees on the asymptotic behavior of these estimators (Liu et al., 2012; Verdú, 2019).
However, when used for training deep neural networks, additional requirements need to be satisfied. In particular, the estimator needs to be differentiable w.r.t. the data distribution (R1), computationally tractable (R2), and rapidly adapt to changes in the underlying distribution (R3). For instance, Mutual Information (MI), a fundamental measure of dependence between variables, only became a popular (standalone or regularizing) learning objective for DNNs once estimators satisfying the above requirements were proposed (Poole et al., 2019; Barber & Agakov, 2003). Although MI is notoriously difficult to estimate in high dimensions (Kraskov et al., 2004; Pichler et al., 2020; McAllester & Stratos, 2020), these estimators have demonstrated promising empirical results in unsupervised representation learning (Krause et al., 2010; Bridle et al., 1992; Hjelm et al., 2019; Tschannen et al., 2020), discrete/invariant representations (Hu et al., 2017; Ji et al., 2019), generative modelling (Chen et al., 2016; Zhao et al., 2017), textual disentangling (Cheng et al., 2020b; Colombo et al., 2021), and applications of the Information Bottleneck (IB) method (Mahabadi et al., 2021; Devlin et al., 2018; Alemi et al., 2016) among others. Compared to MI, Differential Entropy (DE) has received less attention from the ML community while also having interesting applications.
In this paper, we focus on the problem of DE estimation as this quantity naturally appears in many applications (e.g. reinforcement learning (Shyam et al., 2019; Hazan et al., 2019; Ahmed et al., 2019; Kim et al., 2019), IB (Alemi et al., 2016), mode collapse (Belghazi et al., 2018)). Traditional estimators of DE often violate at least one of the requirements (R1) – (R3) listed above (e.g. knearest neighbor based estimators violate (R1)). As a consequence, the absence of DE estimator for arbitrary data distributions forces deep learning researchers to either restrict themselves to special cases where closed-form expressions for DE are available (Shyam et al., 2019) or use MI as a proxy
(Belghazi et al., 2018). In this work, we introduce a Kernelized Neural dIFferential Entropy (KNIFE) estimator, that satisfies the aforementioned requirements and addresses limitations of existing DE estimators (Schraudolph, 2004; McAllester & Stratos, 2020). Stemming from recent theoretical insights (McAllester & Stratos, 2020) that justify the use of DE estimators as building blocks to better estimate MI, we further apply KNIFE to MI estimation. In the context of deep neural networks with high dimensional data (e.g. image, text), KNIFE achieves competitive empirical results in applications where DE or MI is required.
1.1 CONTRIBUTIONS
Our work advances methods in DE and MI estimation in several ways.
1. We showcase limitation of the existing DE estimators proposed in Schraudolph (2004); McAllester & Stratos (2020) with respect to desirable properties required for training deep neural networks. To address these shortcomings, we introduce KNIFE, a fully learnable kernel-based estimator of DE. The flexibility of KNIFE allows us to construct KNIFE-based estimators for conditional DE, conditioning on either a discrete or continuous random variable. 2. We prove learnability under natural conditions on the underlying probability distribution. By requiring a fixed Lipschitz condition and bounded support we are not only able to provide an asymptotic result, but also a confidence bound in the case of a finite training set. This extends the consistency result by Ahmad & Lin (1976). 3. We validate on synthetic datasets (including multi-modal, non-Gaussian distributions), that KNIFE addresses the identified limitations and outperforms existing methods on both DE and MI estimation. In particular, KNIFE more rapidly adapts to changes in the underlying data distribution. 4. We conduct extensive experiments on natural datasets (including text and images) to compare KNIFE-based MI estimators to most recent MI estimators. First, we apply KNIFE in the IB principle to fine-tune a pretrained language model. Using KNIFE, we leverage a closed-form expression of a part of the training objective and achieve the best scores among competing MI estimators. Second, on fair textual classification, the KNIFE-based MI estimator achieves near perfect disentanglement (with respect to the private, discrete label) at virtually no degradation of accuracy in the main task. Lastly, in the challenging scenario of visual domain adaptation, where both variables are continuous, KNIFE-based MI estimation also achieves superior results.
1.2 EXISTENT METHODS AND RELATED WORKS
DE estimation. Existing methods for estimating DE fit into one of three categories (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Verdú, 2019): plug-in estimates (Ahmad & Lin, 1976; Györfi & Van der Meulen, 1987), estimates based on sample-spacings (Tarasenko, 1968), and estimates based on nearest neighbor distances (Kozachenko & Leonenko, 1987; Tsybakov & Van der Meulen, 1996); (Berrett et al., 2019). Our proposed estimator falls into the first category and we will thus focus here on previous work using that methodology. Excellent summaries of all the available methods can be found in the works (Beirlant et al., 1997; Hlaváčková-Schindler et al., 2007; Wang et al., 2009; Verdú, 2019). In Ahmad & Lin (1976), a first nonparametric estimator of DE was suggested and theoretically analyzed. It builds on the idea of kernel density estimation using Parzen-Rosenblatt windowing (Rosenblatt, 1956; Parzen, 1962). More detailed analysis followed (Joe, 1989; Hall & Morton, 1993) but the estimator remained essentially unchanged. Unfortunately, this classical literature is mostly concerned with appropriate regularity conditions that guarantee asymptotic properties of estimators, such as (asymptotic) unbiasedness and consistency. Machine learning applications, however, usually deal with a fixed—often very limited—number of samples.
Differentiable DE estimation. A first estimator that employed a differential learning rule was introduced in Viola et al. (1996). Indeed, the estimator proposed therein is optimized using stochastic optimization, it only used a single kernel with a low number of parameters. An extension that uses a heteroscedastic kernel density estimate, i.e., using different kernels at different positions, has been proposed in Schraudolph (2004). Still the number of parameters was quite low and varying means in the kernels or variable weights were not considered. Although the estimation of DE remained a topic of major interest as illustrated by recent works focusing on special classes of distributions (Kolchinsky & Tracey, 2017; Chaubey & Vu, 2021) and nonparametric estimators (Sricharan et al., 2013; Kandasamy et al., 2015; Moon et al., 2021), the estimator introduced in Schraudolph (2004) was not further refined and hardly explored in recent works.
Differentiable MI estimation. In contrast, there has been a recent surge on new methods for the estimation of the closely related MI between two random variables. The most prominent examples include unnormalized energy-based variational lower bounds (Poole et al., 2019), the lower bounds developed in Nguyen et al. (2010) using variational characterization of f-divergence, the MINEestimator developed in Belghazi et al. (2018) from the Donsker-Varadhan representation of MI which can be also interpreted as an improvement of the plug-in estimator of Suzuki et al. (2008), the noise-contrastive based bound developed in van den Oord et al. (2018) and finally a contrastive upper bound (Cheng et al., 2020a). McAllester & Stratos (2020) point out shortcomings in other estimation strategies and introduce their own Differences of Entropies (DOE) method.
2 KNIFE
In this section we identify limitations of existing entropy estimators introduced in Schraudolph (2004); McAllester & Stratos (2020). Subsequently, we present KNIFE, which addresses these shortcomings.
2.1 LIMITATIONS OF EXISTING DIFFERENTIAL ENTROPY ESTIMATORS
Consider a continuous random vector X ∼ p in Rd. Our goal is to estimate the DE h(X) := − ∫ p(x) log p(x) dx. Given the intractability of this integral, we will rely on a Monte-Carlo estimate of h(X), using N i.i.d. samples Dx = {xn}Nn=1 to obtain
ĥORACLE(Dx) := − 1
N N∑ n=1 log p(xn). (1)
Unfortunately, assuming access to the true density p is often unrealistic, and we will thus construct an estimate p̂ that can then be plugged into (1) instead of p. If p̂ is smooth, the resulting plug-in estimator of DE is differentiable (R1).
Assuming access to an additional—ideally independent—set of M i.i.d. samples E = {x′m}Mm=1, we build upon the Parzen-Rosenblatt estimator (Rosenblatt, 1956; Parzen, 1962)
p̂(x;w, E) = 1 wdM M∑ m=1 κ ( x− x′m w ) , (2)
where w > 0 denotes the bandwidth and κ is a kernel density. The resulting entropy estimator when replacing p in (1) by (2) was analyzed in Ahmad & Lin (1976). In Schraudolph (2004), this approach was extended using the kernel estimator
p̂SCHRAU.(x; A, E) := 1
M M∑ m=1 κAm(x− x′m), (3)
where A := (A1, . . . , AM ) are (distinct, diagonal) covariance matrices and κA(x) = N (x; 0, A) is a centered Gaussian density with covariance matrix A.
The DOE method of McAllester & Stratos (2020) is a MI estimator that separately estimates a DE and a conditional DE. For DE, a simple Gaussian density estimate p̂DOE(x;θ) = κA(x− µ) is used, where θ = (A,µ) are the training parameters, the diagonal covariance matrix A and the mean µ.
While both SCHRAU. and DOE yield differentiable plug-in estimators for DE, they each have a major disadvantage. The strategy of Schraudolph (2004) fixes the kernel mean values at E , which implies that the method cannot adapt to a shifting input distribution (R3). On the other hand, DOE allows for rapid adaptation, but its simple structure makes it inadequate for the DE estimation of multi-modal densities. We illustrate these limitations in Section 3.1.
2.2 KNIFE ESTIMATOR
In KNIFE, the kernel density estimate is given by
p̂KNIFE(x;θ) := M∑ m=1 umκAm(x− am), (4)
where θ := (A,a,u) and the additional parameters 0 ≤ u = (u1, u2, . . . , uM ) with 1 · u = 1 and a = (a1, . . . , aM ) are introduced. Note that p̂KNIFE(x;θ) is a smooth function of θ, and so is our proposed plug-in estimator
ĥKNIFE(Dx;θ) := − 1
N N∑ n=1 log p̂KNIFE(xn;θ). (5)
KNIFE combines the ideas of Schraudolph (2004); McAllester & Stratos (2020). It is differentiable and able to adapt to shifting input distributions, while capable of matching multi-modal distributions. Thus, as we will see in synthetic experiments, incorporating um and shifts am in the optimization enables the use of KNIFE in non-stationary settings, where the distribution of X evolves over time.
Learning step: Stemming from the observation that, by the Law of Large Numbers (LLN),
ĥKNIFE(Dx,θ) LLN ≈ −E [ log p̂KNIFE(X;θ) ] = h(X) + DKL(p‖p̂KNIFE( · ;θ)) ≥ h(X), (6)
we propose to learn the parameters θ by minimizing ĥKNIFE, where E may be used to initialize a. Although not strictly equivalent due to the Monte-Carlo approximation, minimizing ĥKNIFE can be understood as minimizing the Kullback-Leibler (KL) divergence in (6), effectively minimizing the gap between ĥKNIFE and h(X). In fact, ĥKNIFE can also be interpreted as the standard maximum likelihood objective, widely used in modern machine learning. It is worth to mention that the KNIFE estimator is fully differentiable with respect to θ and the optimization can be tackled by any gradient-based method (e.g., Adam (Kingma & Ba, 2014) or AdamW (Loshchilov & Hutter, 2017)).
2.3 CONVERGENCE ANALYSIS
Note that the classical Parzen-Rosenblatt estimator ĥ(Dx;w), where (2) is plugged into (1), is a special case of KNIFE. Thus, the convergence analysis provided in (Ahmad & Lin, 1976, Theorem 1) also applies and yields sufficient conditions for ĥKNIFE(Dx,θ)→ h(X). In Appendix C, we extend this result and, assuming that the underlying distribution p is compactly supported on X = [0, 1]d and L-Lipschitz continuous, the following theorem is proved. Theorem 1. For any δ > 0, there exists a function ε(N,M,w) such that, with probability at least 1− δ,
∣∣ĥ(Dx;w)−h(X)∣∣ ≤ ε(N,M,w). Additionally, ε(N,M,w)→ 0 as M,N →∞ and w → 0 provided that
Nw → 0 and N 2 logN
w2dM → 0, (7)
where M and N denote the number of samples in E and Dx, respectively.
The precise assumptions for Theorem 1 and an explicit formula for ε(N,M,w) are given in Theorem 2 in Appendix C. For instance, Theorem 1 provides a bound on the speed of convergence for the consistency analysis in (Ahmad & Lin, 1976, Theorem 1).
2.4 ESTIMATING CONDITIONAL DIFFERENTIAL ENTROPY AND MUTUAL INFORMATION
Similar to (McAllester & Stratos, 2020), the proposed DE estimator can be used to estimate other information measures. In particular, we can use KNIFE to construct estimators of conditional DE and MI. When estimating the conditional DE and MI for a pair of random variables (X,Y ) ∼ p, we not only use Dx = {xn}Nn=1, but also the according i.i.d. samples Dy = {yn}Nn=1, where (xn, yn) are drawn according to p.
Conditional Differential Entropy. We estimate conditional DE h(X|Y ) by considering θ to be a parameterized function Θ(y) of y. Then all relations previously established naturally generalize and
p̂KNIFE(x|y; Θ) := p̂KNIFE(x; Θ(y)), ĥKNIFE(Dx|Dy; Θ) := − 1
N N∑ n=1 log p̂KNIFE(xn|yn; Θ). (8)
Naturally, minimization of (6) is now performed over the parameters of Θ. If Y is a continuous random variable, we use an artificial neural network Θ(y), taking y as its input. On the other hand, if Y ∈ Y is a discrete random variable, we have one parameter θ for each y ∈ Y , i.e., Θ = {θy}y∈Y and p̂KNIFE(x|y; Θ) = p̂KNIFE(x; Θ(y)) = p̂KNIFE(x;θy).
Mutual Information. To estimate the MI between random variables X and Y (either discrete or continuous), recall that MI can be written as I(X;Y ) = h(X) − h(X|Y ). Therefore, we use the marginal and conditional DE estimators (5) and (8) to build a KNIFE-based MI estimator
ÎKNIFE(Dx,Dy;θ,Θ) := ĥKNIFE(Dx;θ)− ĥKNIFE(Dx|Dy; Θ). (9)
3 EXPERIMENTS USING SYNTHETIC DATA
3.1 DIFFERENTIAL ENTROPY ESTIMATION
In this section we apply KNIFE for DE estimation, comparing it to (3), the method introduced in Schraudolph (2004), subsequently labeled “SCHRAU.”. It is worth to mention that we did not perform the Expectation Maximization algorithm, as suggested in (Schraudolph, 2004), but instead opted to use the same optimization technique as for KNIFE to facilitate a fair comparison.
3.1.1 GAUSSIAN DISTRIBUTION
As a sanity check, we test KNIFE on multivariate normal data in moderately high dimensions, comparing it to SCHRAU. and DOE, which we trained with the exact same parameters. We performed these experiments with d = 10 and d = 64 dimensional data. KNIFE yielded the lowest bias and variance in both cases, despite DOE being perfectly adapted to matching a multivariate Gaussian distribution. Additional details can be found in Appendix A.1.
In order to use a DE estimation primitive in a machine learning system, it must be able to adapt to a changing input distribution during training (R3). As already pointed out in Section 2.1, this is a severe limitation of SCHRAU., as re-drawing the kernel support E can be either impractical or at the very least requires a complete re-training of the entropy estimator. Whereas in (4), the kernel support a is trainable and it can thus adapt to a change of the input distribution. In order to showcase this ability, we utilize the approach of Cheng et al. (2020a) and successively decrease the entropy, observing how the estimator adapts. We perform this experiment with data of dimension d = 64 and repeatedly multiply the covariance matrix of the training vectors with a factor of a = 12 . The resulting entropy estimation is depicted in Figure 1. It is apparent that SCHRAU. suffers from a varying bias. The bias increases with decreasing variance, as the kernel support is fixed and cannot adapt as the variance of Dx shrinks. DOE is perfectly adapted to a single Gaussian distribution and performs similar to KNIFE.
3.1.2 TRIANGLE MIXTURE
KNIFE is able to cope with distributions that have multiple modes. While (3) is also capable of matching multi-modal distributions, DOE is unable to do so, as it approximates any distribution with a multivariate Gaussian. We illustrate this by matching a mixture of randomly drawn triangle distributions. The resulting estimated PDFs as well as the ground truth when estimating the entropy of a 1-dimensional mixture of triangles with 10 components can be observed in Figure 2 (left). With increasing dimension the difficulty of this estimation rises quickly as in d dimensions, the resulting PDF of independent c-component triangle mixtures has cd modes. To showcase the performance of KNIFE in this challenging task, we ran 10 training runs for DE estimation of 2-component triangle mixtures in 8 dimensions. An example training run is depicted in Figure 2 (right).
3.2 MUTUAL INFORMATION ESTIMATION
Multivariate Gauss We repeat the experiments in (Cheng et al., 2020a), stepping up the MI I(Xd;Y d) between d i.i.d. copies of joint normal random variables (X,Y ) by increasing their correlation coefficient, i.e., (X,Y ) are multivariate Gaussian with correlation coefficient ρi in the i-th epoch. A training run is depicted in the top of Figure 3. As in (Cheng et al., 2020a), we also repeat the experiment, applying a cubic transformation to Y . The estimation of MI between d i.i.d. copies of X and Y 3 can be observed in the middle row of Figure 3. The MI is unaffected by this bijective transformation. In Appendix A.3, the bias and variance are depicted separately.
Sum of Uniformly Distributed Variables In order to test the ability of KNIFE to adapt to distributions substantially different from the Gaussian kernel shape, we apply it in MI estimation of I(Xd;Y d) with uniformly distributed data. To this end, let X and E be centered, uniformly distributed random variables with E[X2] = E[E2] = 1 and define Y = ρiX+ √ 1− ρ2iE in the i-th epoch. One training run with d = 20 is shown in Figure 3 (bottom). Details about the source distribution as well as details of the experiments can be found in Appendix A.3.
4 EXPERIMENTS ON NATURAL DATA
In this section, we benchmark our proposed KNIFE-based MI estimator on three practical applications, spanning textual and visual data. We reproduce and compare our method to the most recent MI estimators including MINE (Belghazi et al., 2018), NWJ (Nguyen et al., 2010), InfoNCE (van den Oord et al., 2018), CLUB (Cheng et al., 2020a), and DOE (McAllester & Stratos, 2020). We do not explicitly include the SMILE estimator Song & Ermon (2019) in our comparison as it has the same gradient as NWJ.
Common notation: In all following applications, we will use Φψ : X → Z to denote an encoder, where X is the raw input space (i.e., texts or images), and Z denotes a lower dimensional continuous feature space. Additionally, we will use Cψ : Z → Y to denote a shallow classifier from the latent space Z to a discrete or continuous target space Y for classification or regression, respectively. We will use ψ to denote the parameters of both models, Φψ and Cψ . CE denotes the cross entropy loss.
4.1 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
IB has recently been applied to fine-tune large-scale pretrained models (Mahabadi et al., 2021) such as BERT (Devlin et al., 2018) and aims at suppressing irrelevant features in order to reduce overfitting.
Problem statement. Given a textual input X ∈ X and a target label Y ∈ Y , the goal is to learn the encoder Φψ and classifier Cψ, such that Φψ(X) retains little information about X , while still producing discriminative features, allowing the prediction of Y . Thus, the loss of interest is:
L = λ · I(Φψ(X);X)︸ ︷︷ ︸ compression term − I(Φψ(X);Y )︸ ︷︷ ︸ downstream term , (10)
where λ controls the trade-off between the downstream and the compression terms.
Setup. Following Mahabadi et al. (2021) (relying on VUB), we work with the VIBERT model, which uses a Gaussian distribution as prior. Φψ is implemented as a stochastic encoder Φψ(X) = Z ∼ N (µψ(X),Σψ(X)). Details on the architecture of µψ and Σψ can be found in Appendix B. The classifier Cψ is composed of dense layers. To minimize L, the second part of the objective (10) is bounded using the variational bound from Barber & Agakov (2003). Since we use a Gaussian prior, h(Z|X) can be expressed in closed form.1 Thus, when using KNIFE, I(X;Z) = h(Z) − h(Z|X) can be estimated by using ĥKNIFE to estimate h(Z). We compare this KNIFE-based MI estimator with aforementioned MI estimators and the variational upper bound (VUB). For completeness, we also compare against a BERT model trained by direct minimization of a CE loss.
We closely follow the protocol of (Mahabadi et al., 2021) and work on the GLUE benchmark (Wang et al., 2018) originally composed of 5 datasets. However, following (Mahabadi et al., 2021), we choose to finetune neither on WNLI (Morgenstern & Ortiz, 2015) nor on CoLA (Warstadt et al., 2019) due to reported flaws in these datasets. The evaluation is carried out on the standard validation splits as the test splits are not available. Following standard practice (Liu et al., 2019; Yang et al., 2019), we report the accuracy and the F1 for MRPC, the accuracy for RTE and the Pearson and Spearman correlation coefficient for STS-B.
Results. Table 1 reports our results on the GLUE benchmark. We observe that KNIFE obtains the best results on all three datasets and the lowest variance on MRPC and STS-B. The use of a Gaussian prior in the stochastic encoder Φψ could explain the observed improvement of KNIFE-based estimation over MI-estimators such as CLUB, InfoNCE, MINE, DOE, or NWJ.
4.2 FAIR TEXTUAL CLASSIFICATION
In fair classification, we would like the model to take its decision without utilizing private information such as gender, age, or race. For this task, MI can be minimized to disentangle the output of the encoder Z and a private label S ∈ S (e.g., gender, age, or race).
1h(Z|X) = 1 2 ln |Σψ(X)|+ d2 ln(2πe), where d is the dimension of X and | · | denotes the determinant.
downstream task
+λ · I(Φψ(X);S)︸ ︷︷ ︸
disentangled
, (11)
where λ controls the trade-off between minimizing MI and CE loss. In this framework, a classifier is said to be fair or to achieve perfect privacy if no statistical information about S can be extracted from Φψ(X) by an adversarial classifier. Overall, a good model should achieve high accuracy on the main task (i.e., prediction of Y ) while removing information about the protected attribute S. This information is measured by training an offline classifier to recover the protected attribute S from Φψ(X).
Setup. We compute the second term of (11) with competing MI estimators, as well as the model from Elazar & Goldberg (2018), which will be referred to as “Adv”, as it utilizes an adversary to recover the private label from the latent representation Z. For KNIFE-based MI estimation, we use two DE estimators (as S is a binary label), following the approach outlined in Section 2.4. All derivations are detailed in Appendix B. We follow the experimental setting from Elazar & Goldberg (2018); Barrett et al. (2019) and use two datasets from the DIAL corpus (Blodgett et al., 2016) (over 50 million tweets) where the protected attribute S is the race and the main labels are sentiment or mention labels. The mention label indicates whether a tweet is conversational or not. We follow the official split using 160 000 tweets for training and two additional sets composed of 10 000 tweets each for development and testing. In all cases, the labels S and Y are binary and balanced, thus a random guess corresponds to 50% accuracy.
Results. Figure 4 gathers results on the fair classification task. The upper dashed lines represent the (private and main) task accuracies when training a model with only the CE loss (case λ = 0 in (11)). This shows that the learned encoding Φψ(X) contains information about the protected attribute, when training is only performed for the main task. On both the sentiment and mention task, we observe that a KNIFE-based estimator can achieve perfect privacy (see Figures 4b and 4d) with nearly no accuracy loss in the main task (see Figures 4a and 4c). The other MI estimators exhibit different behavior. For sentiment labels, most MI estimators fail to reach perfect privacy (CLUB, NWJ, DOE, and Adv) while others (InfoNCE) achieve perfect privacy while degrading the main task accuracy (10% loss on main accuracy). For mention labels, CLUB can also reach perfect privacy with almost no degradation of the accuracy of the main task. Overall, it is worth noting that KNIFE-based MI estimation enables better control of the degree of disentanglement than the reported baselines.
4.3 UNSUPERVISED DOMAIN ADAPTATION
In unsupervised domain adaptation, the goal is to transfer knowledge from the source domain (S) with a potentially large number of labeled examples to a target domain (T ), where only unlabeled examples are available.
Problem Statement. The learner is given access to labeled images from a source domain (xs, y) ∼ (XS , Y ) ∈ XS × Y and unlabeled images from a target domain xt ∼ XT ∈ XT . The goal is to
learn a classification model {Φψ, Cψ} that generalizes well to the target domain. Training models on the supervised source data only results in domain-specific latent representations Φψ(X) leading to poor generalization (when X is chosen randomly from {XS , XT }). In order to make the latent representations as domain-agnostic as possible, we follow the information-theoretic method proposed by Gholami et al. (2020), and used in Cheng et al. (2020a). The idea is to learn an additional binary model {Φdν , Cdν}, whose goal it is to guess the domain D ∈ {0, 1} of X . The latent representation learned by Φdν will therefore contain all the domain-specific information that we would like the main encoder Φψ to discard. In other words, we would like Φψ(X) and Φdν(X) to be completely disentangled, which naturally corresponds to the minimization of I(Φψ(X); Φdν(X)). Concretely, the domain classifier is trained to minimize the CE between domain labels D and its own predictions, whereas the main classifier is trained to properly classify support samples while minimizing the MI between Φψ(X) and Φdν(X). Using f d ν := C d ν ◦ Φdν and fψ := Cψ ◦ Φψ , the objectives are
min ν CE(D; fdν (X)) and min ψ CE(Y ; fψ(XS)) + λ · I(Φψ(X); Φdν(X)). (12)
Setup. The different MI estimators are compared based on their ability to guide training by estimating I(Φψ(X); Φdν(X)) in (12). We follow the setup of Cheng et al. (2020a) as closely as possible, and consider a total of 6 source/target scenarios formed with MNIST (LeCun & Cortes, 2010), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009), and STL-10 (Coates et al., 2011) datasets. We reproduce all methods and allocate the same budget for hyper-parameter tuning to every method. The exhaustive list of hyper-parameters can be found in Appendix B.
Results. Results are presented in Table 2. The KNIFE-based estimator is able to outperform MI estimators in this challenging scenario where both Φψ(X) and Φdν(X) are continuous.
5 CONCLUDING REMARKS
We introduced KNIFE, a fully learnable, differentiable kernel-based estimator of differential entropy, designed for deep learning applications. We constructed a mutual information estimator based on KNIFE and showcased several applications. KNIFE is a general purpose estimator and does not require any special properties of the learning problem. It can thus be incorporated as part of any training objective, where differential entropy or mutual information estimation is desired. In the case of mutual information, one random variable may even be discrete.
Despite the fundamental challenges in the problem of differential entropy estimation, beyond limitations arising from the use of a finite number of samples, KNIFE has demonstrated promising empirical results in various representation learning tasks.
Future work will focus on improving the confidence bounds given in Theorem 1. In particular, tailoring them towards KNIFE using tools from (Birge & Massart, 1995; Singh & Poczos, 2014). Another potential extension is direct estimation of the gradient of entropy, when p̂KNIFE(x;θ) has been learned (Mohamed et al., 2020; Song et al., 2020). This could be applied after the learning phase of KNIFE and is left for future work.
APPENDIX
A EXPERIMENTAL DETAILS OF EXPERIMENTS WITH SYNTHETIC DATA
Implementation of KNIFE in PyTorch (Paszke et al., 2019) is rather straightforward. The constraint on the weights u can be satisfied by applying a softmax transformation. The covariance matrices were parameterized by the lower-triangular factor in the Cholesky decomposition of the precision matrices, guaranteeing the definiteness constraint to be satisfied.
A.1 DIFFERENTIAL ENTROPY ESTIMATION OF GAUSSIAN DATA
In Section 3.1.1, the estimation of the entropy h(X) = d2 log 2πe for X ∼ N (0, Id) was performed with the hyperparameters given in Table 3. The mean error and its empirical standard deviation are reported in Table 5 over 20 runs, where an independently drawn evaluation set with the same size as the training set is used. At d = 10 we have the entropy h = d2 log 2πe = 14.19, while for the higher dimension, d = 64 we find h = 90.81.
In the experiment depicted in Figure 1, entropy is decreased after every epoch by letting Xi ∼ N (0, aiId), where i = 0, . . . , 4 is the epoch index. That is, Xi = √ aiGd, where G is a standard normal random variable, resulting in an decrease of the DE by ∆ = −d2 log a ≈ 22.18 for a = 1 2 with every epoch. We start at h(X0) = d2 log 2πe ≈ 90.81 and successively decrease until h(X4) = h(X0) + 4∆ ≈ 2.1. Additional parameters can be found in Table 4.
Computational Resources. Training was performed on an NVidia V100 GPU. Taken together, training for the first experiments of entropy estimation in dimensions d = 10, 64, as well as the experiment depicted in Figure 1 used GPU time of less than 5 minutes.
A.2 DIFFERENTIAL ENTROPY ESTIMATION OF TRIANGLE MIXTURES
In Section 3.1.2, we perform an estimation of the entropy of c-component triangle mixture distributions. The PDF of such a c-component triangle mixture is given by

p(x) = \sum_{i=1}^{c} w_i \Lambda_{s_i}\big(x - (i - \tfrac{1}{2})\big),   (13)

where \Lambda_s(x) := \frac{1}{s}\max\{0, 2 - \frac{4}{s}|x|\} is a centered triangle PDF with width s > 0. The scales s = (s_1, . . . , s_c) and weights w = (w_1, . . . , w_c) satisfy 0 < s_i, w_i < 1 and \sum_{i=1}^{c} w_i = 1. Before the experiment, we choose w uniformly at random from the c-probability simplex and the scales are chosen uniformly at random in [0.1, 1.0]. An example for c = 10 is the true PDF depicted in Figure 2 (left). For d > 1, we perform the estimation on d i.i.d. copies. Note that the triangle mixture with c components in d-dimensional space has c^d modes, i.e., the support can be partitioned into c^d disjoint components.
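As an illustration of this setup, the following sketch evaluates the mixture PDF of (13) and numerically integrates its one-dimensional entropy; this is our own illustration (NumPy, numerical quadrature), not the evaluation code used in the paper.

```python
import numpy as np

def triangle_mixture_pdf(x, w, s):
    """PDF of Eq. (13); component i is centered at i - 1/2."""
    p = np.zeros_like(x, dtype=float)
    for i, (wi, si) in enumerate(zip(w, s), start=1):
        u = np.abs(x - (i - 0.5))
        p += wi * np.maximum(0.0, 2.0 - 4.0 * u / si) / si
    return p

rng = np.random.default_rng(0)
c = 10
w = rng.dirichlet(np.ones(c))            # random weights on the simplex
s = rng.uniform(0.1, 1.0, size=c)        # random scales in [0.1, 1.0]
x = np.linspace(0.0, float(c), 200_001)
p = triangle_mixture_pdf(x, w, s)
logp = np.log(np.where(p > 0, p, 1.0))   # log p where p > 0, and 0 elsewhere
h = -np.trapz(p * logp, x)               # differential entropy in nats
print(f"h(X) = {h:.4f}; for d i.i.d. copies, h(X^d) = d * h(X)")
```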
The parameters of the experiment yielding Figure 2 (left) are given in Table 6, while the details of the experiment depicted in Figure 2 (right) can be found in Table 7. In the latter experiment, over ten runs, the entropy estimate was accurate to 1.6563 ± 0.8528 with KNIFE, to 2.4445 ± 0.5439 using (3), and to 7.1070 ± 2.7984 with DOE. This is the mean absolute error and its empirical standard deviation over all 10 runs, where the evaluation set was drawn independently from the training set and has the same size as the training set.
Computational Resources. Training was performed on an NVidia V100 GPU. Training in d = 1 dimension, that resulted in Figure 2 (left) can be performed in seconds, while all training required for producing Figure 2 (right) used approximately 1.5 hours of GPU time.
A.3 MUTUAL INFORMATION ESTIMATION
In Section 3.2, we estimate I(X^d; Y^d) and I(X^d; (Y^3)^d), where (X, Y) are multivariate correlated Gaussian distributions with correlation coefficient ρ_i in the i-th epoch. Subsequently, we estimate I(X^d; Y^d) where X, E ∼ U[−√3, √3] are independent and Y is given by Y = ρ_i X + √(1 − ρ_i²) E. In both cases, ρ_i is chosen such that I(X^d; Y^d) = 2i in the i-th epoch.
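For the Gaussian case this schedule can be reproduced in closed form, since d i.i.d. correlated standard Gaussian pairs satisfy I(X^d; Y^d) = −(d/2) log(1 − ρ²); for the uniform case, ρ_i is presumably set numerically. The sketch below is our illustration, and the MI unit (nats) is an assumption.

```python
import numpy as np

def sample_correlated_gaussian(n, d, target_mi, rng):
    """Draw (X, Y) with d i.i.d. correlated standard Gaussian coordinates,
    with rho set so that I(X^d; Y^d) = target_mi (in nats)."""
    rho = np.sqrt(1.0 - np.exp(-2.0 * target_mi / d))
    x = rng.standard_normal((n, d))
    e = rng.standard_normal((n, d))
    return x, rho * x + np.sqrt(1.0 - rho ** 2) * e

rng = np.random.default_rng(0)
for i in range(1, 5):                    # epochs with I(X^d; Y^d) = 2, 4, 6, 8
    x, y = sample_correlated_gaussian(n=4096, d=20, target_mi=2.0 * i, rng=rng)
```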
All neural networks are randomly initialized. The bias, variance, and MSE during training, as functions of the MI, can be observed in Figure 5.
The estimation is performed in 10 runs, randomly choosing the training meta-parameters as proposed by McAllester & Stratos (2020). In Figure 3 (bottom), we present the best run for each method, selected by distance from the true MI at the end of training. The bias, variance, and MSE during training, as functions of the MI, can be observed in Figure 6. Details about the source distribution as well as details of the experiments can be found in Table 8. During experimentation, it proved beneficial to train the parameters Θ and θ in (9) separately and to substantially increase the learning rate for the training of θ. Thus, we increase the learning rate for the training of θ by a factor of 10³.
Model Architecture for Θ. We utilize the feed-forward architecture, also used in McAllester & Stratos (2020). It is a simple architecture with two linear layers, one hidden layer using tanh activation, immediately followed by an output layer. The number of neurons in the hidden layer is a meta-parameter selected randomly from {64, 128, 256} for each training run. Three models with this architecture are used for the three parameters (A,a,u), as described by (4), where only the output dimension is changed to fit the parameter dimension.
Computational Resources. Training was performed, using about 6 hours of GPU time on an NVidia V100 GPU to carry out the experiment depicted in Figure 3 (bottom).
B EXPERIMENTAL DETAILS OF EXPERIMENTS ON NATURAL DATA
B.1 ON THE PARAMETER UPDATE
In Section 4, we rely on two different types of models: pretrained (e.g., fine tuning with VIBERT) and randomly initialized (e.g., in fair classification and domain adaptation). When working with randomly initialized networks the parameters are updated. However, it is worth noting that in the literature the pretrained model parameters (i.e. ψ) are not always updated (see Ravfogel et al. (2020)). In our experiments: (i) We always update the parameters (even for pretrained models), and (ii) we did not change the way the parameters were updated in concurrent works (to ensure fair comparison). Specifically,
• for language model finetuning (Appendix B.2), we followed Mahabadi et al. (2021) and did a joint update;
• for the fair classification task (Appendix B.3), we followed common practice and used the algorithm described in Algorithm 1, which relies on an alternating update;
• for the domain adaptation task (Appendix B.4), we followed common practice and used a joint update.
B.2 INFORMATION BOTTLENECK FOR LANGUAGE MODEL FINETUNING
For this experiment we follow the experimental setting introduced in Mahabadi et al. (2021) and work with the GLUE data.²
Model Architecture. We report in Table 9, the multilayer perceptron (MLP) used to compute the compressed sentence representations produced by BERT. Variance and Mean MLP networks are composed of fully connected layers.
² See https://gluebenchmark.com/faq
Algorithm 1: Disentanglement using a MI-based regularizer
1: INPUT Labelled training set D = {(x_j, s_j, y_j) ∀j ∈ [n+1, N]}; independent set of samples E; θ parameters of KNIFE; ψ parameters of the network.
2: INITIALIZE parameters θ, ψ
3: OPTIMIZATION
4: while (θ, ψ) not converged do
5:     for i ∈ [1, Unroll] do        ▷ Learning step for KNIFE
6:         Sample a batch B from E
7:         Update θ using (9).
8:     end for
9:     Sample a batch B′ from D
10:    Update ψ with B′ using (11).
11: end while
12: OUTPUT Encoder and classifier weights ψ
Table 10: Experimental details on Information Bottleneck.

| Parameter     | Value            |
|---------------|------------------|
| Learning Rate | See Appendix B.2 |
| Optimizer     | AdamW            |
| Warmup Steps  | 0.0              |
| Dropout       | 0.0              |
| Batch Size    | 32               |
Model Training. All models are trained for 6 epochs and we use early stopping (the best model is selected on validation-set error). For IB, λ is selected in {10⁻⁴, 10⁻⁵, 10⁻⁶} and K is selected in {144, 192, 288, 384}. We follow Alemi et al. (2016), where the posterior is averaged over 5 samples and a linear annealing schedule is used for λ. Additional hyper-parameters are reported in Table 10.
Dataset Statistics. Table 11 reports the statistics of the dataset used in our finetuning experiment.
Computational Resources. For all these experiments we rely on NVidia-P100 with 16GB of RAM. To complete the full grid-search on 10 seeds and on the three datasets, approximately 1.5k hours are required.
B.3 FAIR TEXTUAL CLASSIFICATION
In this section, we gather the experimental details for the textual fair classification task.
B.3.1 DETAILS OF THE KNIFE-BASED ESTIMATOR
In this experiment, we estimate the MI between a continuous random variable, namely Z = Φψ(X), and a discrete variable, denoted by S ∈ S = {1, 2, . . . , |S|}. We follow the strategy outlined in Section 2.4 for estimating the conditional DE h(Z|S). However, we will reuse the estimate of the conditional PDF p̂(z|s; Θ) to compute an estimate of the DE as
h(Z) ≈ -\frac{1}{N} \sum_{n=1}^{N} \log\Big(\sum_{s \in S} \hat{p}_{KNIFE}(z_n|s; Θ)\,\hat{p}(s)\Big),   (14)

where \hat{p}(s) = \frac{1}{N}|\{n : s_n = s\}| is the empirical distribution of S in the training set D_s.³ In our experiments, with |S| = 2, we found that estimating the DE h(Z) based on the KNIFE estimator learnt for h(Z|S) increases the stability of training. We adopted the same strategy for DOE.
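A sketch of this estimator follows; it is our own illustration, and `knife_per_class`, standing for the per-class conditional density models, is an assumed interface rather than the paper's code.

```python
import torch

def entropy_estimate(z, s, knife_per_class, num_classes):
    """Eq. (14): h(Z) estimated by mixing the learned conditionals p(z|s)
    with the empirical class frequencies p_hat(s) of the batch."""
    log_terms = []
    for c in range(num_classes):
        p_hat_s = (s == c).float().mean()            # empirical p(s); 1/|S| for balanced batches
        log_terms.append(knife_per_class[c].log_prob(z) + torch.log(p_hat_s))
    log_mix = torch.logsumexp(torch.stack(log_terms, dim=0), dim=0)  # log sum_s p(z|s) p(s)
    return -log_mix.mean()                           # Monte-Carlo average over the batch
```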
B.3.2 EXPERIMENTAL DETAILS
Model Architecture. For the encoder, we use a bidirectionnal GRU with two layers with hidden and input dimension set to 128. We use LeakyReLU as the activation function. The classification head is composed of fully connected layers of input dimension 256. We use a learning rate of 0.0001 for AdamW. The dropout rate is set to 0.2. The number of warmup steps is set to 1000.
³ As we work with balanced batches, we will have \hat{p}(s) = 1/|S|.
Computational Resources. For all these experiments, we rely on NVIDIA-P100 with 16GB of RAM. Each model is trained for 30k steps. The model with the lowest MI is selected. The training of a single network takes around 3 hours.
B.4 UNSUPERVISED DOMAIN ADAPTATION
We follow the experimental setup given in Cheng et al. (2020a) as closely as possible, i.e., we pick hyperparameters given in the paper, or if not provided, those set in the code:4
Model Training. We use Adam optimizer for all modules with a learning rate of 0.001. Batch size is set to 128. We set the weighting parameter λ = 0.1. The original code of Cheng et al. (2020a) uses 15 000 training iterations, but we found most methods had not properly converged at this stage, and hence use 25 000 iterations instead. Similar to other experiments, we set the kernel size M = 128.
Model Architecture. Table 12 summarizes the architectures used for the different modules. For the MI network of each method, the best configuration, based on the validation set of the first task MNIST→MNIST-M, is chosen among 4 configurations: with or without LayerNorm and with ReLU or tanh activation.
Computational Resources. For these experiments, we used a cluster of NVIDIA-V100 with 16GB of RAM. Each training (i.e., 25k iterations) on a single task requires on average 2 hours. Given that we have 6 tasks, and repeat the training for 3 different seeds, on average 36 hours computation time is required for each method.
C BOUNDING THE ERROR
In the following, fix L > 0 and let P_L be the set of L-Lipschitz PDFs supported⁵ on X := [0, 1]^d, i.e., \int_X p(x)\,dx = 1 and

∀x, y ∈ R^d : |p(x) - p(y)| ≤ L‖x - y‖,   (15)

for p ∈ P_L, where⁶ ‖x‖ := \sum_k |x_k|.
Assume p ∈ P_L and let κ be a PDF supported on X. In order to show that estimation of h(X) is achievable, we use a standard Parzen-Rosenblatt estimator

\hat{p}(x; w) := \frac{1}{M w^d} \sum_{m=1}^{M} \kappa\Big(\frac{x - X'_m}{w}\Big),

as in (2). The entropy estimate is then defined by the empirical average

\hat{h}(D_x; w) := -\frac{1}{N} \sum_{n=1}^{N} \log \hat{p}(X_n; w).   (16)
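A direct NumPy sketch of this estimator is shown below. Note that the analysis assumes a kernel supported on X, whereas the Gaussian kernel used here for illustration is not compactly supported, so the sketch mirrors only the construction, not the exact assumptions of Theorem 2.

```python
import numpy as np

def entropy_parzen(X_eval, X_fit, w):
    """Parzen-Rosenblatt estimate (16): density fit on M samples X_fit,
    entropy evaluated on N samples X_eval, bandwidth w."""
    N, d = X_eval.shape
    M = X_fit.shape[0]
    diff = (X_eval[:, None, :] - X_fit[None, :, :]) / w           # (N, M, d)
    log_kappa = -0.5 * (diff ** 2).sum(-1) - 0.5 * d * np.log(2 * np.pi)
    # log p_hat(X_n) = logsumexp_m log kappa((X_n - X'_m)/w) - log(M w^d)
    log_p = np.logaddexp.reduce(log_kappa, axis=1) - np.log(M) - d * np.log(w)
    return -log_p.mean()

rng = np.random.default_rng(0)
d = 2
X_eval, X_fit = rng.standard_normal((2000, d)), rng.standard_normal((2000, d))
print(entropy_parzen(X_eval, X_fit, w=0.4))   # compare: (d/2) log(2*pi*e) ~ 2.84
```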
Further, define the following quantities, which are assumed to be finite:

p_{\max} := \max\{p(x) : x ∈ X\},   (17)
C_1 := \int p(x) \log^2 p(x)\,dx,   (18)
C_2 := L \int \|u\|\,\kappa(u)\,du,   (19)
K_{\max} := \max\{\kappa(x) : x ∈ X\}.   (20)

Note that it is easily seen that p_{\max} ≤ L/2 and C_1 ≤ \max\{p_{\max}\log^2 p_{\max}, 4e^{-2}\} by our assumptions. The requirement C_2, K_{\max} < ∞ represents a mild condition on the kernel function κ. We can now show the following.
⁴ https://github.com/Linear95/CLUB/tree/master/MI_DA.
⁵ Any known compact support suffices. An affine transformation then yields X = [0, 1]^d, while possibly resulting in a different Lipschitz constant.
⁶ The ℓ₁ norm is chosen to facilitate subsequent computations. By the equivalence of norms on R^d, any norm suffices.
Theorem 2. With probability greater than 1 - δ we have

|h(X) - \hat{h}(D_x; w)| ≤ -\log\Big(1 - \frac{3N K_{\max}}{w^d \delta} \sqrt{\frac{\log(6N/\delta)}{2M}} - \frac{3N C_2 w}{\delta}\Big) + \sqrt{\frac{3C_1}{N\delta}},   (21)

if the expression in the logarithm is positive.

In particular, the estimation error approaches zero as N → ∞ if w = w(N) → 0 and M = M(N) → ∞ are chosen such that

Nw → 0,   (22)
\frac{N^2 \log N}{w^{2d} M} → 0.   (23)
We prove Theorem 2 in several Lemmas.
Lemma 3. Fix δ > 0 and x₀ ∈ X. Then, with probability greater than 1 - δ,

|p(x₀) - \hat{p}(x₀)| ≤ \frac{K_{\max}}{w^d} \sqrt{\frac{\log(2/\delta)}{2M}} + C_2 w.   (24)
Proof. First, we can show that

|E[\hat{p}(x₀)] - p(x₀)| = \Big| \frac{1}{Mw^d} \sum_{m=1}^{M} \int \kappa\Big(\frac{x₀ - x}{w}\Big) p(x)\,dx - p(x₀) \Big|   (25)
= \Big| \frac{1}{w^d} \int \kappa\Big(\frac{x₀ - x}{w}\Big) p(x)\,dx - p(x₀) \Big|   (26)
= \Big| \int \kappa(u)\, p(x₀ - wu)\,du - p(x₀) \Big|   (27)
= \Big| \int \kappa(u)\, [p(x₀ - wu) - p(x₀)]\,du \Big|   (28)
≤ \int \kappa(u)\, |p(x₀ - wu) - p(x₀)|\,du   (29)
≤ \int \kappa(u)\, Lw\|u\|\,du   (30)
= wC_2.   (31)

Next, note that

|E[\hat{p}(x₀)] - \hat{p}(x₀)| ≤ \frac{K_{\max}}{w^d} \sqrt{\frac{\log(2/\delta)}{2M}}   (32)

holds with probability greater than 1 - δ, as the requirements of McDiarmid's inequality (Paninski, 2003, Sec. 3) are satisfied with c_j = \frac{K_{\max}}{Mw^d}, and thus P\{|E[\hat{p}(x₀)] - \hat{p}(x₀)| ≥ ε\} ≤ δ with

ε = \frac{K_{\max}}{w^d} \sqrt{\frac{\log(2/\delta)}{2M}}.   (33)

Combining (31) and (32) gives (24).
Lemma 4. For any continuous random variable X supported on X and a ≥ 0, we have

P\{p(X) ≤ a\} ≤ a.   (34)

Proof. We apply Markov's inequality to the random variable Y = 1/p(X) and observe that

P\{p(X) ≤ a\} = P\{Y ≥ a^{-1}\} ≤ a\,E[Y] = \mathrm{vol}(X)\,a = a.   (35)
Lemma 5. If x > 0, y ≥ a > 0, 0 < a < 1, and |x - y| ≤ δ < a, then

|\log x - \log y| ≤ \log\frac{a}{a - δ} = -\log\Big(1 - \frac{δ}{a}\Big).   (36)
Proof. Case x ≥ y. We can write y = a + b and x = y + c = a + b + c for b ≥ 0 and 0 ≤ c ≤ δ < a. Then

\Big|\log\frac{x}{y}\Big| = \log\Big(1 + \frac{c}{a + b}\Big)   (37)
≤ \log\Big(1 + \frac{c}{a}\Big) ≤ \log\Big(1 + \frac{δ}{a}\Big).   (38)

Furthermore,

\log\frac{a}{a - δ} - \log\Big(1 + \frac{δ}{a}\Big) = \log\frac{a^2}{(a + δ)(a - δ)}   (39)
= \log\frac{a^2}{a^2 - δ^2}   (40)
≥ \log 1 = 0,   (41)

so the bound in (38) is dominated by \log\frac{a}{a - δ}, as claimed.

Case x < y. Here, we can write y = a + b and x = y - c = a + b - c for b ≥ 0 and 0 ≤ c ≤ δ < a. Then

\Big|\log\frac{x}{y}\Big| = \log\frac{y}{x}   (42)
= \log\Big(\frac{a + b}{a + b - c}\Big)   (43)
≤ \log\Big(\frac{a}{a - c}\Big)   (44)
≤ \log\Big(\frac{a}{a - δ}\Big) = -\log\Big(1 - \frac{δ}{a}\Big).   (45)
Proof of Theorem 2. We apply Lemma 3 N times and use the union bound to show that, with probability greater than 1 - δ/3, we have for every n ∈ [N]

|p(X_n) - \hat{p}(X_n)| ≤ \frac{K_{\max}}{w^d}\sqrt{\frac{\log(6N/\delta)}{2M}} + C_2 w.   (46)

Similarly, by Lemma 4, we have with probability greater than 1 - δ/3 that

p(X_n) ≥ \frac{δ}{3N}   (47)

for all n ∈ [N].

Again by the union bound, with probability greater than 1 - 2δ/3, both (46) and (47) hold for all n ∈ [N], and thus, by Lemma 5, we obtain

\Big| \hat{h}(D_x; w) + \frac{1}{N}\sum_{n=1}^{N} \log p(X_n) \Big| = \Big| \frac{1}{N}\sum_{n=1}^{N} \log\frac{p(X_n)}{\hat{p}(X_n)} \Big|   (48)
≤ -\log\Big(1 - \frac{3N}{\delta}\Big[\frac{K_{\max}}{w^d}\sqrt{\frac{\log(6N/\delta)}{2M}} + C_2 w\Big]\Big)   (49)
= -\log\Big(1 - \frac{3N K_{\max}}{w^d\delta}\sqrt{\frac{\log(6N/\delta)}{2M}} - \frac{3N C_2 w}{\delta}\Big),   (50)

provided the argument of the logarithm is positive. Finally, we have the upper bound on the variance

E\Big[\Big(h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n)\Big)^2\Big] = \frac{1}{N^2}\sum_{n=1}^{N} E[(h(X) + \log p(X))^2]   (51)
= \frac{1}{N}\big(E[\log^2 p(X)] - h(X)^2\big)   (52)
≤ \frac{C_1}{N},   (53)

and apply Chebyshev's inequality, showing that with probability greater than 1 - δ/3,

\Big| h(X) + \frac{1}{N}\sum_{n=1}^{N}\log p(X_n) \Big| ≤ \sqrt{\frac{3C_1}{N\delta}}.   (54)

The union bound and the triangle inequality applied to (50) and (54) yield the desired result.
D LIBRARIES USED
For our experiments, we built upon code from the following sources.
• VIBERT (Mahabadi et al., 2021) at github.com/rabeehk/vibert.
• TRANSFORMERS (Wolf et al., 2019) at github.com/huggingface/transformers.
• DOE (McAllester & Stratos, 2020) at github.com/karlstratos/doe.
• SMILE (Song & Ermon, 2019) at github.com/ermongroup/smile-mi-estimator.
• InfoNCE, MINE, NWJ, CLUB (Cheng et al., 2020a) at github.com/Linear95/CLUB. | 1. What is the focus of the paper regarding density function estimation?
2. What are the strengths of the proposed method, particularly in terms of its adaptation to data distribution shifts?
3. What are the weaknesses of the paper regarding the technical significance of the proposed approach?
4. How does the reviewer question the updating process of the parameter θ and its potential impact on mode collapse?
5. How did the author address the reviewer's concerns in the revised draft? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies differentiable proxy estimators for density functions, which are in turn used to compute various information metrics such as (conditional) entropy and mutual information. The main contribution of this paper is that it generalizes previous kernel-based density estimators by parameterizing the anchors and mixture probabilities of the kernels. The advantages of the resulting method are that it 1) has increased capacity and 2) can adapt to input-data distribution shift. The authors provide a convergence analysis of the resulting entropy estimator, as well as extensive empirical studies on synthetic and real data, such as BERT+text classification and Unsupervised Domain Adaptation.
Review
[Strength]
Empirical evidence demonstrates the effectiveness of the proposed method in terms of its estimation error and adaptation to underlying data distribution shift. The results seem fairly consistent, at least for the tasks and baselines the authors included.
[Weakness]
The technical significance of the proposed method seems incremental: the previously proposed Schraudolph estimator already includes the covariances as learnable parameters, and there do not seem to be many technical challenges in making the anchors and mixture probabilities learnable as well. But I'll take the author's defense on this into consideration.
[Other comments]
θ = (A, a, u) can be viewed as a parameter for the loss function. How is θ updated, say for the VIBERT example? Is it updated together with the model parameters or in a bilevel fashion (alternating)? If it is the former, how would it prevent mode collapse? Usually, in the AutoML literature where one wants to learn a surrogate loss function, people would alternately update the loss-function parameters and the model parameters to prevent mode collapse or overfitting.
---- post rebuttal update The author's response and the revised draft addressed my questions. The background information provided in the statement of novelty improves my view on the matter, though I hold to the opinion that extending the Schraudolph estimator does not seem to be particularly challenging. That being said, the proposed method is well motivated and its effectiveness is sufficiently backed by empirical evaluations. Overall, it is a paper of quality. I raise the tech contribution score to 3 and confidence to 3 and am inclined towards acceptance.
ICLR | Title
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Abstract
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications, e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and it additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker's error rate up to a factor of 85× with minimal impact on the utility for benign users.
1 INTRODUCTION
Effectiveness of state-of-the-art DNN models at a variety of predictive tasks has encouraged their usage in a range of real-world applications, e.g., home assistants, autonomous vehicles, commercial cloud APIs. Models in such applications are valuable intellectual property of their creators, as developing them for commercial use is a product of intense labour and monetary effort. Hence, it is vital to preemptively identify and control threats from an adversarial lens focused at such models. In this work we address model stealing, which involves an adversary attempting to counterfeit the functionality of a target victim ML model by exploiting black-box access (query inputs in, posterior predictions out).
Stealing attacks date back to Lowd & Meek (2005), who addressed reverse-engineering linear spam classification models. Recent literature predominantly focuses on DNNs (specifically CNN image classifiers), where attacks are shown to be highly effective (Tramèr et al., 2016) on complex models (Orekondy et al., 2019), even without knowledge of the victim's architecture (Papernot et al., 2017b) or the training data distribution. The attacks have also been shown to be highly effective at replicating pay-per-query image prediction APIs, for as little as $30 (Orekondy et al., 2019).
Defending against stealing attacks, however, has received little attention. Existing defense strategies aim to either detect stealing query patterns (Juuti et al., 2019) or degrade the quality of the predicted posterior via perturbation. Since detection makes strong assumptions on the attacker's query distribution (e.g., small L2 distances between successive queries), our focus is on the more popular perturbation-based defenses. A common theme among such defenses is accuracy-preserving posterior perturbation: the posterior distribution is manipulated while retaining the top-1 label. For instance, rounding decimals (Tramèr et al., 2016), revealing only high-confidence predictions (Orekondy et al., 2019), and introducing ambiguity at the tail end of the posterior distribution (Lee et al., 2018). Such strategies benefit from preserving the accuracy metric of the defender. However, in line with previous works (Tramèr et al., 2016; Orekondy et al., 2019; Lee et al., 2018), we find models can be effectively stolen using just the top-1 predicted label returned by the black-box. Specifically, in many cases we observe <1% difference between attacks that use the full range of
posteriors (blue line in Fig. 1) to train stolen models and the top-1 label (orange line) alone. In this paper, we work towards effective defenses (red line in Fig. 1) against DNN stealing attacks with minimal impact on the defender's accuracy.
The main insight behind our approach is that, unlike a benign user, a model stealing attacker additionally uses the predictions to train a replica model. By introducing controlled perturbations to predictions, our approach targets poisoning the training objective (see Fig. 2). Our approach allows for a utility-preserving defense, as well as trading off a marginal utility cost to significantly degrade the attacker's performance. As a practical benefit, the defense involves a single hyperparameter (the perturbation utility budget) and can be used with minimal overhead on any classification model, without retraining or modifications.
We rigorously evaluate our approach by defending six victim models against four recent and effective DNN stealing attack strategies (Papernot et al., 2017b; Juuti et al., 2019; Orekondy et al., 2019). Our defense consistently mitigates all stealing attacks and further shows improvements over multiple baselines. In particular, we find our defense degrades the attacker's query sample efficiency by 1-2 orders of magnitude. Our approach significantly reduces the attacker's performance (e.g., 30-53% reduction on MNIST and 13-28% on CUB200) at a marginal cost (1-2%) to the defender's test accuracy. Furthermore, our approach can achieve the same level of mitigation as baseline defenses while introducing significantly less perturbation.
Contributions. (i) We propose the first utility-constrained defense against DNN model stealing attacks; (ii) We present the first active defense which poisons the attacker’s training objective by introducing bounded perturbations; and (iii) Through extensive experiments, we find our approach consistently mitigate various attacks and additionally outperform baselines.
2 RELATED LITERATURE
Model stealing attacks (also referred to as 'extraction' or 'reverse-engineering') in the literature aim to infer hyperparameters (Oh et al., 2018; Wang & Gong, 2018), recover exact parameters (Lowd & Meek, 2005; Tramèr et al., 2016; Milli et al., 2018), or extract the functionality (Correia-Silva et al., 2018; Orekondy et al., 2019) of a target black-box ML model. In some cases, the extracted model information is optionally used to perform evasion attacks (Lowd & Meek, 2005; Nelson et al., 2010; Papernot et al., 2017b). The focus of our work is model functionality stealing, where the attacker's yardstick is the test-set accuracy of the stolen model. Initial works on stealing simple linear models (Lowd & Meek, 2005) have recently been succeeded by attacks shown to be effective on complex CNNs (Papernot et al., 2017b; Correia-Silva et al., 2018; Orekondy et al., 2019) (see Appendix B for an exhaustive list). In this work, we work towards defenses targeting this latter line of DNN model stealing attacks.
Since ML models are often deployed in untrusted environments, a long line of work exists on guaranteeing certain (often orthogonal) properties to safeguard against malicious users. The properties include security (e.g., robustness towards adversarial evasion attacks (Biggio et al., 2013; Goodfellow et al., 2014; Madry et al., 2018)) and integrity (e.g., running in untrusted environments (Tramer & Boneh, 2019)). To prevent leakage of private attributes (e.g., identities) specific to training data in the resulting ML model, differential privacy (DP) methods (Dwork et al., 2014) introduce randomization during training (Abadi et al., 2016; Papernot et al., 2017a). In contrast, our defense objective is to provide confidentiality and protect the functionality (intellectual property) of the ML model against illicit duplication.
Model stealing defenses are limited. Existing works (which are primarily in multiclass classification settings) aim to either detect stealing attacks (Juuti et al., 2019; Kesarwani et al., 2018; Nelson et al., 2009; Zheng et al., 2019) or perturb the posterior prediction. We focus on the latter, since detection involves making strong assumptions on adversarial query patterns. Perturbation-based defenses are predominantly non-randomized and accuracy-preserving (i.e., the top-1 label is unchanged). Approaches include revealing probabilities only of confident classes (Orekondy et al., 2019), rounding probabilities (Tramèr et al., 2016), or introducing ambiguity in posteriors (Lee et al., 2018). None of the existing defenses claim to mitigate model stealing; rather, they only marginally delay the attack by increasing the number of queries. Our work focuses on presenting an effective defense, significantly decreasing the attacker's query sample efficiency within a principled utility-constrained framework.
3 PRELIMINARIES
Model Functionality Stealing. Model stealing attacks are cast as an interaction between two parties: a victim/defender V ('teacher' model) and an attacker A ('student' model). The only means of communication between the parties are black-box queries: the attacker queries inputs x ∈ X and the defender returns a posterior probability distribution y = P(y|x) = F_V(x), where y ∈ ∆^K and ∆^K = {y : y ⪰ 0, 1ᵀy = 1} is the probability simplex over K classes (we use K instead of K − 1 for notational convenience). The attack occurs in two (sometimes overlapping) phases: (i) querying: the attacker uses the black-box as an oracle labeler on a set of inputs to construct a 'transfer set' of input-prediction pairs D_transfer = {(x_i, y_i)}_{i=1}^{B}; and (ii) training: the attacker trains a model F_A to minimize the empirical risk on D_transfer. The end-goal of the attacker is to maximize accuracy on a held-out test set (considered the same as that of the victim for evaluation purposes).
Knowledge-limited Attacker. In model stealing, attackers justifiably lack complete knowledge of the victim model FV . Of specific interest are the model architecture and the input data distribution to train the victim model PV (X) that are not known to the attacker. Since prior work (Hinton et al., 2015; Papernot et al., 2016; Orekondy et al., 2019) indicates functionality largely transfers across architecture choices, we now focus on the query data used by the attacker. Existing attacks can be broadly categorized based on inputs {x ∼ PA(X)} used to query the black-box: (a) independent distribution: (Tramèr et al., 2016; Correia-Silva et al., 2018; Orekondy et al., 2019) samples inputs from some distribution (e.g., ImageNet for images, uniform noise) independent to input data used to train the victim model; and (b) synthetic set: (Papernot et al., 2017b; Juuti et al., 2019) augment a limited set of seed data by adaptively querying perturbations (e.g., using FGSM) of existing inputs. We address both attack categories in our paper.
Defense Objectives. We perturb predictions in a controlled setting: ỹ = F_V^δ(x) = y + δ s.t. ỹ, y ∈ ∆^K. The defender has two (seemingly conflicting) objectives: (i) utility: perturbed predictions should remain useful to a benign user. We consider two utility measures: (a) Acc(F_V^δ, D_test): accuracy of the defended model on test examples; and (b) dist(y, ỹ) = ‖y − ỹ‖_p to measure the perturbation magnitude. (ii) non-replicability: to reduce the test accuracy of an attacker (denoted as Acc(F_A, D_test)) who exploits the predictions to train a replica F_A on D_transfer. For consistency, we evaluate both the defender's and the attacker's stolen model accuracies on the same set of test examples D_test.
Defender’s Assumptions. We closely mimic an assumption-free scenario similar to existing perturbation-based defenses. The scenario entails the knowledge-limited defender: (a) unaware whether a query is malicious or benign; (b) lacking prior knowledge of the strategy used by an attacker; and (c) perturbing each prediction independently (hence circumventing Sybil attacks). For added rigor, we also study attacker’s countermeasures to our defense in Section 5.
4 APPROACH: MAXIMIZING ANGULAR DEVIATION BETWEEN GRADIENTS
Motivation: Targeting First-order Approximations. We identify that the attacker eventually optimizes parameters of a stolen model F(·; w) (we drop the subscript ·_A for readability) to minimize the loss on training examples {(x_i, ỹ_i)}. Common to a majority of optimization algorithms is estimating the first-order approximation of the empirical loss, by computing the gradient of the loss w.r.t. the model parameters w ∈ R^D:

u = −∇_w L(F(x; w), y)   (1)
Maximizing Angular Deviation (MAD). The core idea of our approach is to perturb the posterior probabilities y, which results in an adversarial gradient signal that maximally deviates (see Fig. 2) from the original gradient (Eq. 1). More formally, we add targeted noise to the posteriors, which results in a gradient direction:

a = −∇_w L(F(x; w), ỹ)   (2)

to maximize the angular deviation between the original and the poisoned gradient signals:

max_a 2(1 − cos∠(a, u)) = max_â ‖â − û‖₂²,   where â = a/‖a‖₂, û = u/‖u‖₂.   (3)

Given that the attacker model is trained to match the posterior predictions, such as by minimizing the cross-entropy loss L(y, ỹ) = −Σ_k ỹ_k log y_k, we rewrite Equation (2) as:

a = −∇_w L(F(x; w), ỹ) = ∇_w Σ_k ỹ_k log F(x; w)_k = Σ_k ỹ_k ∇_w log F(x; w)_k = Gᵀỹ,

where G ∈ R^{K×D} represents the Jacobian of the log-likelihood predictions F(x; w) over K classes w.r.t. the parameters w ∈ R^D. By similarly rewriting Equation (1), substituting them in Equation (3), and including the constraints, we arrive at the poisoning objective (Eq. 4-7) of our approach, which we refer to as MAD. We can optionally enforce accuracy preservation of the poisoned prediction via constraint (8), which will be discussed shortly.
max_ỹ ‖ Gᵀỹ/‖Gᵀỹ‖₂ − Gᵀy/‖Gᵀy‖₂ ‖₂²   (=: H(ỹ))   (4)
where G = ∇_w log F(x; w), G ∈ R^{K×D}   (5)
s.t. ỹ ∈ ∆^K   (simplex constraint)   (6)
     dist(y, ỹ) ≤ ε   (utility constraint)   (7)
     arg max_k ỹ_k = arg max_k y_k   (for variant MAD-argmax)   (8)
The above presents a challenging black-box optimization problem for the defense, since the defender justifiably lacks access to the attacker model F (Eq. 5). Apart from addressing this challenge in the next few paragraphs, we also discuss (a) solving a non-standard and non-convex constrained maximization objective; and (b) preserving the accuracy of predictions via constraint (8).
Estimating G. Since we lack access to the adversary's model F, we estimate the Jacobian G = ∇_w log F_sur(x; w) (Eq. 5) per input query x using a surrogate model F_sur. We empirically determined (details in Appendix E.1) that the choice of architecture for F_sur is robust to the choice of the adversary's architecture F. However, the initialization of F_sur plays a crucial role, with best results obtained using a fixed, randomly initialized model. We conjecture this occurs because surrogate models with a high loss provide better gradient signals to guide the defender.
Heuristic Solver. Gradient-based strategies to optimize objective (Eq. 4) often leads to poor local maxima. This is in part due to the objective increasing in all directions around point y (assumingG is full-rank), making optimization sensitive to initialization. Consequently, we resort to a heuristic to solve for ỹ. Our approach is motivated by Hoffman (1981), who show that the maximum of a convex function over a compact convex set occurs at the extreme points of the set. Hence, our two-step solver: (i) searches for a maximizer y∗ for (4) by iterating over the K extremes yk (where yk=1) of the probability simplex ∆K ; and (ii) then computes a perturbed posterior ỹ as a linear interpolation of the original posteriors y and the maximizer y∗: ỹ = (1 − α)y + αy∗, where α is selected such that the utility constraint (Eq. 7) is satisfied. We further elaborate on the solver and present a pseudocode in Appendix C.
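For concreteness, a minimal PyTorch sketch of this two-step solver follows, assuming an L1 utility budget; the names (mad_perturb, eps) are ours, and the actual implementation additionally supports the MAD-argmax constraint and other distance functions.

```python
import torch

def mad_perturb(G, y, eps):
    """Two-step heuristic: search the simplex extremes, then interpolate.

    G: (K, D) surrogate Jacobian; y: (K,) clean posterior; eps: L1 budget.
    """
    K = y.shape[0]
    u = G.T @ y
    u = u / (u.norm() + 1e-12)
    best_val, y_star = -1.0, y
    for k in range(K):                       # step (i): iterate over extremes of the simplex
        yk = torch.zeros(K, dtype=y.dtype)
        yk[k] = 1.0
        a = G.T @ yk
        a = a / (a.norm() + 1e-12)
        val = ((a - u) ** 2).sum().item()    # objective H(y~) of Eq. (4)
        if val > best_val:
            best_val, y_star = val, yk
    # Step (ii): largest step toward y* keeping ||y~ - y||_1 <= eps.
    denom = (y_star - y).abs().sum().item()
    alpha = 1.0 if denom == 0 else min(eps / denom, 1.0)
    return (1 - alpha) * y + alpha * y_star
```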
Variant: MAD-argmax. Within our defense formulation, we encode an additional constraint (Eq. 8) to preserve the accuracy of perturbed predictions. The MAD-argmax variant helps us perform accuracy-preserving perturbations similar to prior work. In contrast, however, the perturbations are constrained (Eq. 7) and are specifically introduced to maximize the MAD objective. We enforce the accuracy-preserving constraint in our solver by iterating over the extremes of the intersection of sets (6) and (8): ∆^K_k = {y : y ⪰ 0, 1ᵀy = 1, y_k ≥ y_j ∀ j ≠ k} ⊆ ∆^K.
5 EXPERIMENTAL RESULTS
5.1 EXPERIMENTAL SETUP
Victim Models and Datasets. We set up six victim models (see column ‘FV ’ in Table 1), each model trained on a popular image classification dataset. All models are trained using SGD (LR = 0.1) with momentum (0.5) for 30 (LeNet) or 100 epochs (VGG16), with a LR decay of 0.1 performed every 50 epochs. We train and evaluate each victim model on their respective train and test sets.
Attack Strategies. We hope to broadly address all DNN model stealing strategies during our defense evaluation. To achieve this, we consider attacks that vary in query data distributions (independent and synthetic; see Section 3) and strategies (random and adaptive). Specifically, in our experiments we use the following attack models: (i) Jacobian-based Data Augmentation ‘JBDA’ (Papernot et al., 2017b);
(ii,iii) ‘JB-self’ and ‘JB-top3’ (Juuti et al., 2019); and (iv) Knockoff Nets ‘knockoff’ (Orekondy et al., 2019); We follow the default configurations of the attacks where possible. A recap and implementation details of the attack models are available in Appendix D.
In all attack strategies, the adversary trains a model F_A to minimize the cross-entropy loss on a transfer set (D_transfer = {(x_i, ỹ_i)}_{i=1}^{B}) obtained by using the victim model F_V to pseudo-label inputs x_i (sampled or adaptively synthesized). By default, we use B=50K queries, which achieves reasonable performance for all attacks and additionally makes defense evaluation tractable. The size of the resulting transfer set (B=50K examples) is comparable (e.g., 1× for CIFAR10/100, 2.1× for Caltech256) to the size of the victim's training set. In line with prior work (Papernot et al., 2016; Orekondy et al., 2019), we too find (Section 5.2.3) that attack and defense performances are unaffected by the choice of architectures, and hence use the victim architecture for the stolen model F_A. Due to the complex parameterization of VGG-16 (100M+), we initialize the weights from a pretrained TinyImageNet or ImageNet model (except for the last FC layer, which is trained from scratch). All stolen models are trained using SGD (LR=0.1) with momentum (0.5) for 30 epochs (LeNet) and 100 epochs (VGG16). We find the choices of the attacker's architecture and optimization do not undermine the defense (discussed in Section 5.2.3).
Effectiveness of Attacks. We evaluate accuracy of resulting stolen models from the attack strategies as-is on the victim’s test set, thereby allowing for a fair head-to-head comparison with the victim model (additional details in Appendix A and D). The stolen model test accuracies, along with undefended victim model FV accuracies are reported in Table 1. We observe for all six victim models, using just 50K black-box queries, attacks are able to significantly extract victim’s functionality e.g., >87% on MNIST. We find the knockoff attack to be the strongest, exhibiting reasonable performance even on complex victim models e.g., 74.6% (0.93×Acc(FV )) on Caltech256.
How Good are Existing Defenses? Most existing defenses in the literature (Tramèr et al., 2016; Orekondy et al., 2019; Lee et al., 2018) perform some form of information truncation on the posterior probabilities, e.g., rounding or returning top-k labels; all strategies preserve the rank of the most confident label. We now evaluate model stealing attacks at the extreme end of information truncation, wherein the defender returns just the top-1 'argmax' label. This strategy illustrates a rough lower bound on the strength of the attacker when using existing defenses. Specific to knockoff, we observe the attacker is minimally impacted on simpler datasets (e.g., 0.2% accuracy drop on CIFAR10; see Fig. A5 in Appendix). While this has a larger impact on more complex datasets involving numerous classes (e.g., a maximum of 23.4% drop observed on CUB200), the strategy also introduces a significant perturbation (L1 = 1 ± 0.5) to the posteriors. The results suggest existing defenses, which largely preserve the top-1 label, are mostly ineffective at mitigating model stealing attacks.
Defenses: Evaluation. We evaluate all defenses on a non-replicability vs. utility curve at various operating points of the defense. We furthermore evaluate the defenses for a large query budget (50K). We use as non-replicability the accuracy of the stolen model on held-out test data Dtest.
We use two utility metrics: (a) accuracy: test accuracy of the defended model producing perturbed predictions on D_test; and (b) perturbation magnitude ε: measured as the L1 distance ‖y − ỹ‖₁.
Defense: Baselines. We compare our approach against three methods: (i) reverse-sigmoid (Lee et al., 2018), which softens the posterior distribution and introduces ambiguity among non-argmax probabilities. For this method, we evaluate non-replicability and utility metrics for the defense operating at various choices of its hyperparameter β ∈ [0, 1], while keeping the dataset-specific hyperparameter γ fixed (MNIST: 0.2, FashionMNIST: 0.4, CIFAR10: 0.1, rest: 0.2). (ii) random noise: for controlled random noise, we add uniform random noise δ_z to the logit prediction scores (z̃ = z + δ_z, where z = log(y/(1 − y))), enforce utility by projecting δ_z onto an ε_z-ball (Duchi et al., 2008), and renormalize the probabilities via ỹ = 1/(1 + e^{−z̃}). (iii) dp-sgd: while our method and the previous two baselines perturb predictions, we also compare against introducing randomization into the victim model parameters by training with the DP-SGD algorithm (Abadi et al., 2016). DP is a popular technique to protect the model against training-data inference attacks. This baseline allows us to verify whether the same protection extends to model functionality.
5.2 RESULTS
In the following sections, we demonstrate the effectiveness of our defense, rigorously evaluated across a wide range of complex datasets, attack models, defense baselines, query budgets, and utility budgets. For readability, we first evaluate the defense against the attack models, then compare the defense against strong baselines, and finally provide an analysis of the defense.
5.2.1 MAD DEFENSE VS. ATTACKS
Figure 3 presents the evaluation of our defenses MAD (Eq. 4-7) and MAD-argmax (Eq. 4-8) against the four attack models. To successfully mitigate attacks as a defender, we want the defense curves (colored solid lines with operating points denoted by thin crosses) to move away from the undefended accuracies (denoted by circular discs, where ε = 0.0) towards ideal defense performance (cyan cross, where Acc(Def.) is unchanged and Acc(Att.) is chance-level). We observe from Figure 3 that, by employing an identical defense across all datasets and attacks, the effectiveness of the attacker can be greatly reduced. Across all models, we find MAD provides reasonable operating points (above the diagonal), where the defender achieves significantly higher test accuracies compared to the attacker. For instance, on MNIST, for <1% drop in the defender's accuracy, our defense simultaneously reduces the accuracy of the jbtop3 attacker by 52% (87.3%→35.7%) and of knockoff by 29% (99.1%→69.8%). We find similarly promising results even on high-dimensional complex datasets, e.g., on CUB200, a 23% (65.1%→41.9%) performance drop of knockoff for a 2% drop in the defender's test performance. Our results indicate effective defenses are achievable, where the defender can trade off a marginal utility cost to drastically impede the attacker.
5.2.2 MAD DEFENSE VS. BASELINE DEFENSES
We now study how our approach compares to baseline defenses, by evaluating the defenses against the knockoff attack (which resulted in the strongest attack in our experiments). From Figure 4, we observe:
Figure 5: Follow-up to Figure 4b (CIFAR10), but with the attacker using only the argmax label.
(i) Utility objective = L1 distance (Fig. 4a): Although random-noise and reverse-sigmoid reduce the attacker's accuracy, these strategies in most cases involve larger perturbations. In contrast, MAD and MAD-argmax provide similar non-replicability (i.e., Acc(Att.)) with significantly less perturbation, especially at lower magnitudes. For instance, on MNIST (first column), MAD (L1 = 0.95) reduces the accuracy of the attacker to under 80% with 0.63× the perturbation of reverse-sigmoid and random-noise (L1 ≈ 1.5). (ii) Utility objective = argmax-preserving (Fig. 4b): By setting a hard constraint on retaining the label of the predictions, we find the accuracy-preserving defenses MAD-argmax and reverse-sigmoid successfully reduce the performance of the attacker by at least 20% across all datasets. In most cases, we find MAD-argmax additionally achieves this objective by introducing less distortion to the predictions than reverse-sigmoid. For instance, in Fig. 4a, we find MAD-argmax consistently reduces the attacker accuracy to the same amount at lower L1 distances. In reverse-sigmoid, we attribute the large L1 perturbations to a shift of the posteriors towards a uniform distribution, e.g., a mean entropy of perturbed predictions of 3.02 ± 0.16 (max-entropy = 3.32) at L1 = 1.0 for MNIST; in contrast, MAD-argmax displays a mean entropy of 1.79 ± 0.11. However, common to accuracy-preserving strategies is the pitfall that the top-1 label is retained. In Figure 5 (see overlapping red and yellow cross-marks), we present the results of training the attacker using only the top-1 label. In line with previous discussions, we find that the attacker is able to significantly recover the original performance of the stolen model for the accuracy-preserving defenses MAD-argmax and reverse-sigmoid.
(iii) Non-replicability vs. utility trade-off (Fig. 4b): We now compare our defense MAD (blue lines) with the baselines (rand-noise and dp-sgd) which trade off utility to mitigate model stealing. Our results indicate MAD offers a better defense (lower attacker accuracies for similar defender accuracies). For instance, to reduce the attacker's accuracy to <70%, while the defender's accuracy significantly degrades using dp-sgd (39%) and rand-noise (56.4%), MAD incurs a marginal decrease of 1%.
Figure 8: MAD ablation experiments on MNIST (MAD, MAD-argmax, G = I, y* = rand, ideal). Utility = (left) L1 distance ‖y − ỹ‖₁, (right) defender test accuracy.
5.2.3 ANALYSIS
How much angular deviation does MAD introduce? To obtain insights into the angular deviation induced between the true and the perturbed gradient, we conduct an experiment by tracking the true gradient direction (which was unknown so far) at each training step. We simulate this by training an attacker model using online SGD (LR=0.001) over N iterations, using B distinct images to query and a batch size of 1. At each step t of training, the attacker queries a randomly sampled input x_t to the defender model and backpropagates the loss resulting from ỹ_t. In this particular experiment, the perturbation ỹ_t is crafted with exact knowledge of the attacker's parameters. We evaluate the angular deviation between gradients with (a) and without (u) the perturbation.

In Figure 6, we visualize a histogram of deviations θ = arccos(⟨u, a⟩/(‖u‖‖a‖)), where u = ∇_w L(w_t, y, ·) and a = ∇_w L(w_t, ỹ, ·). We observe: (i) although our perturbation space is severely restricted (a low-dimensional probability simplex), we can introduce surprisingly high deviations (0-115°) in the high-dimensional parameter space of the VGG16; (ii) for ε values at reasonable operating points which preserve the defender's accuracy within 10% of the undefended accuracy (e.g., ε ∈ [0.95, 0.99] for CIFAR10), we see deviations with mean 24.9° (yellow bars in Fig. 6). This indicates that the perturbed gradient on average leads to a slower decrease in the loss function; (iii) on the extreme end, with ε = ε_max = 2, we find that on average the perturbations successfully flip (>90°) the gradient direction, leading to an increase in the test loss, as seen in Figure 7 (blue line). We also find the above observations transfer reasonably to a black-box attacker setting (see Appendix F.4), where the perturbations are crafted without knowledge of the attacker's parameters. Overall, we find our approach considerably corrupts the attacker's gradient direction.
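The deviation measurement itself is straightforward to reproduce. Below is a minimal PyTorch sketch of this computation, assuming a soft-label cross-entropy loss; the function names and the loss form are our assumptions, not taken from the paper's code.

```python
import torch

def flat_grad(model, loss_fn, x, target):
    """Flattened parameter gradient of loss_fn(model(x), target)."""
    grads = torch.autograd.grad(loss_fn(model(x), target), list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def angular_deviation_deg(model, loss_fn, x, y_clean, y_pert):
    """Angle (degrees) between clean gradient u and poisoned gradient a."""
    u = flat_grad(model, loss_fn, x, y_clean)
    a = flat_grad(model, loss_fn, x, y_pert)
    cos = torch.dot(u, a) / (u.norm() * a.norm() + 1e-12)
    return torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))

# Soft-label cross-entropy, matching L(F(x; w), y~) = -sum_k y~_k log F(x; w)_k.
soft_xent = lambda logits, t: -(t * torch.log_softmax(logits, dim=-1)).sum(-1).mean()
```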
Ablative Analysis. We present an ablation analysis of our approach in Figure 8. In this experiment, we compare our approach MAD and MAD-argmax to: (a) G = I: We substitute the jacobian G (Eq. 5) with a K ×K identity matrix; and (b) y∗=rand: Inner maximization term (Eq. 4) returns a random extreme of the simplex. Note that both (a) and (b) do not use the gradient information to perturb the posteriors.
From Figure 8, we observe: (i) the poor performance of y* = rand, indicating that random untargeted perturbation of the posterior probabilities is a poor strategy; (ii) G = I, where the angular deviation is maximized between the posterior probability vectors, is a slightly better strategy; (iii) MAD outperforms the above approaches. Consequently, we find that using the gradient information (although a proxy for the attacker's gradient signal) within our formulation (Equation 4) is crucial to providing better model stealing defenses.
Subverting the Defense. We now explore various strategies an attacker can use to circumvent the defense. To this end, we evaluate the following strategies: (a) argmax: attacker uses only the most-confident label during training; (b) arch-*: attacker trains other choices of architectures; (c) nquery: attacker queries each image multiple times; (d) nquery+aug: same as (c), but with random cropping and horizontal flipping; and (e) opt-*: attacker uses an adaptive LR optimizer e.g., ADAM (Kingma & Ba, 2014).
We present results for the subversion strategies in Figure 9. We find our defense robust to the above strategies. Our results indicate that the best strategy for the attacker to circumvent our defense
is to discard the probabilities and rely only on the most confident label to train the stolen model. In accuracy-preserving defenses (see Fig. 5), this previously resulted in an adversary entirely circumventing the defense (recovering up to 1.0× original performance). In contrast, we find MAD is nonetheless effective in spite of the strategy, maintaining a 9% absolute accuracy reduction in attacker’s stolen performance.
6 CONCLUSION
In this work, we were motivated by the limited success of existing defenses against DNN model stealing attacks. While prior work is largely based on passive defenses focusing on information truncation, we proposed the first active defense strategy that attacks the adversary's training objective. We found our approach effective in defending a variety of victim models and against various attack strategies. In particular, we find our defense can reduce the accuracy of the adversary by up to 65%, without significantly affecting the defender's accuracy.
Acknowledgement. This research was partially supported by the German Research Foundation (DFG CRC 1223). We thank Paul Swoboda and David Stutz for helpful discussions.
Appendix
A OVERVIEW AND NOTATION
Figure A1: Overview of Attack, Defense, and Evaluation Metrics. We consider an attacker A who exploits black-box access to defended model F δV to train a stolen model FA. In this paper, we take the role of the defender who intends to minimize replicability (i.e., Acc(FA,Dtest)), while maintaining utility of the predictions. We consider two notions of utility: (1) minimizing perturbations in predictions, measured here using L1 distance; and (2) maintaining accuracy of the defended model on test set Acc(F δV ,Dtest). Note that for a fair head-to-head comparison, we use the same held-out test set Dtest to evaluate accuracies of both the defended model F δV and stolen model FA. Similar to all prior work, we assumeDtrain,Dtest are drawn i.i.d from the same (victim) distribution DV . Notation used in the above figure is further elaborated in Table A1.
Table A1: Notation

| Symbol | Meaning |
|---|---|
| x | Inputs (images ∈ R^{C×H×W}) |
| y, ỹ | Original, perturbed posterior predictions |
| ∆^K | Probability simplex over K vertices |
| Attacker A: | |
| P_A(X) | Attacker's input data distribution |
| D_transfer | Transfer set (= {(x_i, y_i)}, where x_i ∼ P_A(X), y_i = F_V(x_i)) |
| F_A | Attacker's (stolen) model trained on D_transfer |
| Victim/Defender V: | |
| P_V(X) | Victim's input data distribution |
| D_train | Training data (= {(x_i, y_i)}, where x_i ∼ P_V(X)) |
| F_V | Undefended model trained on D_train |
| F_V^δ | Defended model |
| D_test | Test set (= {(x_i, y_i)}, where x_i ∼ P_V(X)) |
B RELATED WORK: EXTENSION
A summary of existing model stealing attacks and defenses is presented in Table A2.
C DETAILED ALGORITHM
We present a detailed algorithm (see Algorithm 1) for our approach described in Section 4.
The algorithm roughly follows four steps:
(i) Predict (L2): Obtains posterior probability predictions y for input x using a victim model FV (x;wV ).
| # | Work | Black-box type | Attack: Query Data | Attack: Adapt.? | Defense: Strategy | P/D? | AP? | AC |
|---|------|----------------|--------------------|-----------------|-------------------|------|-----|----|
| 1 | Lowd & Meek (2005) | Linear | Random Noise | ✓ | - | - | - | - |
| 2 | Nelson et al. (2009) | Linear | Labeled Data | ✗ | Rejection | D | ✗ | 1 |
| 3 | Nelson et al. (2010) | Linear | Random Noise | ✓ | - | - | - | - |
| 4 | Alabdulmohsin et al. (2014) | Linear | Random Noise | ✓ | Ensembling | P | ✗ | 4 |
| 5 | Tramèr et al. (2016) | Linear, NN | Random Noise | † | Rounding | P | ✓ | 5 |
| 6 | Milli et al. (2018) | Linear, NN | Random Noise | ✓ | - | - | - | - |
| 7 | Kesarwani et al. (2018) | Decision Tree | - | - | Detection | D | ✓ | 5 |
| 8 | Chandrasekaran et al. (2019) | Linear | Random Noise | ✓ | Random Pert. | P | ✗ | - |
| 9 | Papernot et al. (2017b) | CNN | Synth. Data | ✓ | - | - | - | - |
| 10 | Correia-Silva et al. (2018) | CNN | Unlabeled Data | ✗ | - | - | - | - |
| 11 | Pal et al. (2019) | CNN | Unlabeled Data | † | - | - | - | - |
| 12 | Orekondy et al. (2019) | CNN* | Unlabeled Data | † | Rounding, Top-k | P | ✓ | 12 |
| 13 | Jagielski et al. (2019) | CNN* | Unlabeled Data | ✓ | - | - | - | - |
| 14 | Juuti et al. (2019) | CNN | Synth. Data | ✓ | Detection | D | ✓ | 9,14 |
| 15 | Lee et al. (2018) | CNN | - | - | Reverse sigmoid | P | ✓ | 9 |
| 16 | Ours | CNN* | - | - | Targeted Pert. | P | † | 9,12,14 |

Table A2: Existing DNN Attacks and Defenses. Complements the discussion in Section 2. 'CNN*': Complex ImageNet-like CNN. '†': Both. 'P/D': Perturbation/Detection. 'AP': Accuracy-preserving (i.e., maintains top-1 labels of predictions). 'AC': Attacks considered.
(ii) Estimate Jacobian G (L3): We estimate the R^{K×D} Jacobian matrix on a surrogate model F. By default, we use as F a randomly initialized model (more details in Appendix E.1). Each row of G represents the gradient direction (in parameter space R^D) of the log-likelihood of class k.

(iii) Maximize MAD Objective (L4): We find the optimal direction y* which maximizes the MAD objective (Eq. 3). To compute the arg max, we iterate over the K extremes of the probability simplex ∆^K to find the y* which maximizes the objective. The extreme y_k denotes a probability vector with y_k = 1.

(iv) Enforce Utility Constraint (L5-7): We enforce the perturbation utility constraint (Eq. 7) by considering a linear interpolation of y* and y. The resulting interpolated probability vector ỹ := h(α*) represents the utility-constrained perturbed prediction that is returned instead of y.
1  Function PerturbedPredict-MAD(x):
       Input: Input data x, model to defend F_V(·; w_V), proxy attacker model F(·; w)
       Output: Perturbed posterior probability ỹ ∈ ∆^K s.t. dist(ỹ, y) ≤ ε
2      y := F_V(x; w_V)                                  // Obtain K-dim posteriors
3      G := ∇_w log F(x; w)                              // Pre-compute (K × D) Jacobian
4      y* := arg max_{y_k ∈ ext(∆^K)} ‖Gᵀy_k/‖Gᵀy_k‖₂ − Gᵀy/‖Gᵀy‖₂‖₂²   // Alternatively ext(∆^K_k) for MAD-argmax
5      Define h(α) = (1 − α) y + α y*
6      α* := max{α ∈ [0, 1] : dist(h(α), y) ≤ ε}         // Found via bisection, or OptStep(·) for L_p norms
7      ỹ := h(α*)                                        // Perturbed probabilities
8      return ỹ
9
10 Function OptStep(y, y*, ε, p):
11     α* := min{ε/‖y − y*‖_p, 1}
12     return α*

Algorithm 1: MAD Defense. To supplement the approach in Section 4.
D ATTACK MODELS: RECAP AND IMPLEMENTATION DETAILS
Jacobian Based Data Augmentation (jbda) (Papernot et al., 2017b). The motivation of the approach is to obtain a surrogate of the victim black-box classifier, with an end-goal of performing evasion attacks (Biggio et al., 2013; Goodfellow et al., 2014). We restrict discussions primarily to the first part of constructing the surrogate. To obtain the surrogate (the stolen model), the authors depend on an unlabeled ‘seed’ set, typically from the same distribution as that used to train the victim model. As a result, the attacker assumes (mild) knowledge of the input data distribution and the class-label of the victim.
The key idea behind the approach is to query perturbations of inputs to obtain a reasonable approximation of the decision boundary of the victim model. The attack strategy involves performing the following steps repeatedly: (i) images from the substitute set (initially the seed) D are labeled by querying the victim model F_V as an oracle labeler; (ii) the surrogate model F_A is trained on the substitute dataset; (iii) the substitute set is augmented using perturbations of existing images: D_{ρ+1} = D_ρ ∪ {x + λ_{ρ+1} · sgn(J_F[F_A(x)]) : x ∈ D_ρ}, where J_F is the Jacobian function. We use a seed set of 100 (MNIST and FashionMNIST), 500 (CIFAR10, CUB200, Caltech256), and 1000 (CIFAR100) images. We use the default set of hyperparameters of Papernot et al. (2017b) in other respects.
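As an illustration, a minimal PyTorch sketch of one such augmentation round is given below; the function name and the use of logits as the Jacobian target are our assumptions about the method, not code from the original papers.

```python
import torch

def jbda_augment(stolen_model, images, victim_labels, lam=0.1):
    """One Jacobian-based augmentation round: perturb each substitute image
    along the sign of d score[victim label] / d x of the stolen model."""
    x = images.clone().requires_grad_(True)
    scores = stolen_model(x)                                  # (B, K) logits
    sel = scores.gather(1, victim_labels.view(-1, 1)).sum()   # scores at assigned labels
    grad, = torch.autograd.grad(sel, x)
    x_new = (images + lam * grad.sign()).detach()             # synthetic queries
    return torch.cat([images, x_new], dim=0)                  # D_{rho+1} = D_rho U {...}
```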
Jacobian Based {self, top-k} (jbself, jbtop3) (Juuti et al., 2019). The authors generalize the above approach by extending the manner in which the synthetic samples are produced: in jb-top3, the Jacobian is calculated w.r.t. the k = 3 nearest classes, and in jb-self, w.r.t. the maximum a posteriori class predicted by F_A.
Knockoff Nets (knockoff) (Orekondy et al., 2019). Knockoff is a recent attack model which demonstrated that model stealing can be performed without access to seed samples. Rather, the queries to the black-box involve natural images (which can be unrelated to the training data of the victim model) sampled from a large independent data source, e.g., ImageNet1K. Consequently, no knowledge of the input data distribution nor of the class-label space of the victim model is required to perform model stealing. The paper proposes two strategies for sampling images to query: random and adaptive. We use the random strategy, since adaptive sampling resulted in only marginal improvements in an open-world setup (which we have).

As the independent data sources in our knockoff attacks, we use: EMNIST-Letters (when stealing the MNIST victim model), EMNIST (FashionMNIST), CIFAR100 (CIFAR10), CIFAR10 (CIFAR100), and ImageNet1k (CUB200, Caltech256). Overlap between query images and the training data of the victim models is purely coincidental.
We use the code from the project’s public github repository.
Evaluating Attacks. The resulting replica model FA from all the above attack strategies are evaluated on a held-out test set. We remark that the replica model is evaluated as-is, without additional finetuning or modifications. Similar to prior work, we evaluate the accuracies of FA on the victim’s held-out test set. Evaluating both stolen and the victim model on the same test set allows for fair head-to-head comparison.
E SUPPLEMENTARY ANALYSIS
In this section, we present additional analysis to supplement Section 5.2.3.
E.1 ESTIMATING G
Central to our defense is estimating the Jacobian matrix G = ∇_w log F(x; w) (Eq. 5), where F(·; w) is the attacker's model. However, a defender with black-box attacker knowledge (where F is unknown) must determine G using a surrogate model F_sur instead. We determine the choice of F_sur empirically by studying two factors: (a) architecture of F_sur: the choice of the defender's surrogate architecture is robust to varying attacker architectures (see Fig. A2); and (b) initialization of F_sur: the initialization of the surrogate model parameters plays a crucial role in providing a better defense. We
Figure A2: Influence of attacker architecture choices (F_A ∈ {VGG16, ResNet34, DenseNet}) on a fixed surrogate (F_sur = VGG16, CIFAR10): Acc(Attacker) vs. Acc(Defender).
Figure A3: Influence of initialization of a VGG16 surrogate model (MNIST, FashionMNIST, CIFAR10). 'rand' = random initialization; ('early', 'mid', 'late') = ∼(25, 50, 75)% test accuracy of the surrogate on the test set.
Table A3: Run times (in ms). We report the mean and standard deviation of predictions of undefended and defended models, computed over 10K predictions.

| Dataset | Undefended | MAD |
|---|---|---|
| MNIST | 0.88 ± 14.41 | 6.47 ± 12.25 |
| FashionMNIST | 0.89 ± 15.76 | 6.65 ± 14.16 |
| CIFAR10 | 1.93 ± 13.02 | 8.58 ± 15.02 |
| CIFAR100 | 2.15 ± 18.82 | 69.26 ± 21.4 |
| CUBS200 | 4.45 ± 9.66 | 446.93 ± 23.87 |
| Caltech256 | 4.93 ± 21.25 | 815.97 ± 30.3 |
We consider four choices of initialization: {‘rand’, ‘early’, ‘mid’, ‘late’}, which exhibit approximately {chance-level, 25%, 50%, 75%} test accuracies, respectively. We observe (see Fig. A3) that a randomly initialized model, which is far from convergence, provides better gradient signals for crafting perturbations.
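A minimal sketch of forming G row-by-row, one backward pass per class as noted in E.2, is given below (PyTorch assumed; `surrogate` denotes the randomly initialized Fsur, and the names are illustrative):

```python
# Sketch of estimating G = grad_w log F_sur(x; w), one backward pass per class.
import torch

def estimate_G(surrogate, x):
    params = [p for p in surrogate.parameters() if p.requires_grad]
    logp = torch.log_softmax(surrogate(x.unsqueeze(0)), dim=1).squeeze(0)  # (K,)
    rows = []
    for k in range(logp.numel()):
        grads = torch.autograd.grad(logp[k], params, retain_graph=True)
        rows.append(torch.cat([g.flatten() for g in grads]))
    return torch.stack(rows)  # G in R^{K x D}
```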
E.2 RUN-TIME ANALYSIS
We present the run-times of our defended and undefended models in Table A3. The reported numbers are summarized over 10K unique predictions performed on an Nvidia Tesla V100. We find that our optimization procedure, Eq. (4-7), takes under a second for all models, with at most 0.8s in the case of Caltech256. The primary computational bottleneck of our defense implementation is estimating the matrix G ∈ RK×D in Eq. 5, which currently requires performing K (i.e., the number of output classes) backward passes through the surrogate model. Consequently, we find that our inference time on Caltech256 can be further reduced to 0.3s ± 0.04 by using a more efficient surrogate architecture (e.g., ResNet-34).
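A rough harness of the kind that could produce such per-prediction latencies is sketched below (assumes a CUDA device; `predict_fn` wraps either the undefended or the defended prediction path and is illustrative):

```python
# Sketch of averaging wall-clock latency (ms) over many predictions.
import time
import torch

def mean_latency_ms(predict_fn, inputs):
    torch.cuda.synchronize()
    start = time.perf_counter()
    for x in inputs:
        predict_fn(x)
    torch.cuda.synchronize()
    return 1000.0 * (time.perf_counter() - start) / len(inputs)
```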
F ADDITIONAL PLOTS
F.1 ATTACKER EVALUATION
We present an evaluation of all the attacks considered in the paper on undefended models in Figure A4. Furthermore, specific to the knockoff attack, we analyze in Figure A5 how training using only the top-1 label (instead of the complete posterior information) affects the attacker.
F.2 BUDGET VS. ACCURACY
We plot the budget (i.e., number of distinct black-box attack queries to the defender) vs. the test accuracy of the defender/attacker in Figure A6. The figure supplements Figure 1 and the discussion found in Section 5.2.1 of the main paper.
[Figure: attacker accuracy vs. number of queries (0-50k) for MNIST, FashionMNIST, CIFAR10, CIFAR100, CUBS200, and Caltech256; attacks: jbda, jbself, jbtop3, knockoff.]
Figure A4: Evaluation of all attacks on undefended victim models.
[Figure: attacker accuracy vs. number of queries (0-50k) for the same six datasets; legend: Posteriors vs. Top-1 label.]
Figure A5: Stolen model trained using the knockoff strategy on complete posterior information (y) and only the top-1 label of the posteriors (argmaxk yk).
[Figure: test accuracy vs. budget (0-50k queries) for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100, one column per ε ∈ {0.1, 0.5, 0.99, 1.1}; curves: MAD vs. Undefended, for both Defender and Attacker.]
Figure A6: Budget vs. Test Accuracy. Supplements Fig. 3c in the main paper.
[Figure: Acc(Defender) ↑ vs. Acc(Attacker) ↓ for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100; defenses: MAD, MAD-argmax, random-noise, reverse-sigmoid, ideal.]
Figure A7: Attacker argmax. Supplements Fig. 4 in the main paper.
[Figure: attacker test loss vs. training iterations N for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100; polar histograms of angular deviations in the black-box setting for MNIST, FashionMNIST, and CIFAR10, with mean deviations annotated for each ε ∈ {0.01, 0.1, 0.5, 1.0, 2.0}; bottom row: attacker test loss (log scale) over training epochs for the same ε values.]
Figure A8: Histogram of Angular Deviations (Black-box setting). Supplements Fig. 6 in the main paper. The test loss of the attacker model for each of the histograms (over multiple ε values) is provided in the bottom row.
F.3 ATTACKER ARGMAX
In Figure A7, we perform the non-replicability vs. utility evaluation (complementing Fig. 5 in the main paper) under a special situation: the attacker discards the probabilities and only uses the top-1 ‘argmax’ label to train the stolen model. Relevant discussion can be found in Section 5.2.2.
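Concretely, this attacker variant amounts to collapsing each returned posterior to a hard label before training, as in the following sketch (illustrative names):

```python
# Sketch of the 'argmax' attacker variant: posteriors are discarded and only the
# top-1 label is kept as the training target for the stolen model.
import torch

def to_argmax_targets(posteriors: torch.Tensor) -> torch.Tensor:
    return posteriors.argmax(dim=1)  # hard labels for standard cross-entropy
```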
F.4 BLACK-BOX ANGULAR DEVIATIONS
In Figure A8, we provide the angular deviations obtained in a black-box setting over the course of training the attack model. We train the attacker model using the transfer set obtained by the knockoff approach (the strongest attacker in our experiments) for 50 epochs using SGD (lr = 0.01, momentum = 0.5) and a batch size of 64. The experiment complements our previous discussion in Section 5.2.3 of the main paper under “How much angular deviation does MAD introduce?”. As before, we estimate the angular deviations as θ = arccos(u·a / (||u|| ||a||)), where u = ∇wL(wt,y, ·) and a = ∇wL(wt, ỹ, ·). We observe from Figure A8: (i) the defensive angular deviations introduced by MAD to posterior predictions transfer to the black-box attacker setting, where perturbations are crafted without access to the adversary’s model parameters; and (ii) although this setting introduces lower angular deviations at the extreme case of ε = 2.0 (e.g., 114.7° → 76.5° in CIFAR10), the perturbation remains sufficient to maximize the attacker’s test loss. Overall, we find that our approach introduces significant angular deviations in the black-box setting as well.
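The deviation statistic itself can be computed as in the following sketch, mirroring the formula above (`u` and `a` are assumed to be flattened parameter-gradient tensors):

```python
# Sketch of the angular deviation theta = arccos(u.a / (||u|| ||a||)) in degrees.
import torch

def angular_deviation_deg(u, a, eps=1e-12):
    u, a = u.flatten(), a.flatten()
    cos = torch.dot(u, a) / (u.norm() * a.norm() + eps)
    return torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))
```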
F.5 MAD ABLATION EXPERIMENTS
We present the ablation experiments covering all defender models in Figure A9. Relevant discussion is available in Section 5.2.3 of the main paper under “Ablative Analysis”.
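For reference, the variants compared in the ablation can be sketched as follows: the inner maximizer y* is searched over the K one-hot extremes of the simplex using the surrogate Jacobian G (MAD), using an identity matrix in place of G (G = I), or drawn at random (y* = rand), and ỹ then linearly interpolates y toward y*. All names below are illustrative.

```python
# Sketch of the ablated perturbation variants; G has shape (K, D), y lies on the
# K-simplex, and alpha is chosen so the utility constraint dist(y, y~) <= eps holds.
import torch

def perturb(y, G, variant="MAD", alpha=0.5):
    K = y.numel()
    extremes = torch.eye(K, dtype=y.dtype)
    if variant == "y*=rand":
        y_star = extremes[torch.randint(K, (1,)).item()]
    else:
        M = extremes if variant == "G=I" else G
        u = M.T @ y
        u = u / u.norm()
        def deviation(e):
            v = M.T @ e
            return (v / v.norm() - u).pow(2).sum().item()
        best = max(range(K), key=lambda k: deviation(extremes[k]))
        y_star = extremes[best]
    return (1 - alpha) * y + alpha * y_star  # y~ = (1 - alpha) y + alpha y*
```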
[Figure: top row: ||y − ỹ||1 ↓ vs. Acc(Attacker) ↓ for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100 (B = 50K); bottom row: Acc(Defender) ↑ vs. Acc(Attacker) ↓; variants: MAD, MAD-argmax, MAD-relax, G = I, y* = rand, ideal.]
Figure A9: MAD ablation experiments. Supplements Fig. 8 in the main paper.

1. What is the focus of the paper regarding defense against model stealing attacks?
2. What are the strengths of the proposed approach, particularly in its significance and extensiveness?
3. Do you have any concerns or questions about the paper's content, such as clarification, notation, problem formulation, heuristics, solver details, and experimental choices?
4. How do you assess the novelty and value of the paper's contribution after considering the authors' responses?

Review
This paper proposed an effective defense against model stealing attacks.
Merits:
1) In general, this paper is well written and easy to follow.
2) The approach is a significant supplement to existing defense against model stealing attacks.
3) Extensive experiments.
However, I still have concerns about the current version.
I will possibly adjust my score based on the authors' response.
1) In the model stealing setting, the attacker and defender are both knowledge-limited. This should be clarified better in Sec. 3. It is important to highlight that the defender has no access to F_A, and thus problem (4) is a black-box optimization problem for the defense. Also, it would be better to have a table summarizing the notation.
Additional questions on problem formulation:
a) Problem (4) only relies on the transfer set, where $x \sim P_A(x)$, right?
b) For the evaluation metrics, utility and non-replicability, do they share the same D^{test}? How are these determined, in particular for F_A?
c) One utility constraint is missing in problem (4). I noticed that it was mentioned in MAD-argmax; however, I suggest adding it to formulation (4).
2) The details of heuristic solver are unclear. Although the authors pointed out the pseudocode in the appendix, it lacks detailed analysis.
3) In Estimating G, how is the surrogate model selected? Moreover, in the experiments, the authors mention that defense performance is unaffected by the choice of architecture, and hence use the victim architecture for the stolen model. If possible, could the authors provide results for different architecture choices of the stolen model as well as the surrogate model?
############## Post-feedback ################
I am satisfied with the authors' response. Thus, I would like to keep my positive comments on this paper. Although the paper is between 6 and 8, I finally decide to increase my score to 8 due to its novelty in formulation and extensive experiments. |
Figure A9: MAD ablation experiments. Supplements Fig. 8 in the main paper. | 1. What is the focus of the paper, and what are the proposed methods for defending against stealing attacks?
2. What are the strengths of the paper, particularly regarding its clarity and experimental results?
3. What are the weaknesses of the paper, especially concerning the threat model and the performance gap between the proposed method and random perturbation?
4. How does the reviewer assess the feasibility of the optimization procedure and its potential impact on the time required to return query outputs?
5. What additional information or explanations does the reviewer request regarding the attacks and their testing process?
6. Does the reviewer have any concerns or suggestions regarding the transferability of the proposed defense to a black-box attacker? | Review | Review
The paper proposes a new method for defending against stealing attacks.
Positives:
1) The paper was very readable and clear.
2) The proposed method is straightforward and well motivated.
3) The authors included a good amount of experimental results.
Concerns:
1) You note that the random perturbation to the outputs performs poorly compared to your method, but this performance gap seems to decrease as the dataset becomes more difficult (i.e. CIFAR100). I’m concerned that this may indicate that the attackers are generally weak and this threat model may not be very serious. Overall, I’m skeptical of this threat model - the attackers require a very large number of queries, and don’t achieve great results on difficult datasets. Including results on a dataset like ImageNet would be nice.
2) How long does this optimization procedure take? It seems possibly unreasonable for the victim to implement this defense if it significantly lengthens the time to return outputs of queries.
3) Although this is a defense paper, it would be nice if the attacks were explained a bit more. Specifically, how are these attacks tested? You use the validation set, but does the attacker have knowledge about the class-label space of the victim? If the attacker trained with some synthetic data/other dataset, do you then freeze the feature extractor and train a linear layer to validate on the victim’s test set? It seems like this is discussed in the context of the victim in the “Attack Models” subsection, but it’s unclear what’s happening with the attacker.
4) It would be nice to see an angular histogram plot for a model where the perturbed labels were not crafted with knowledge of this model’s parameters - i.e. transfer the proposed defense to a blackbox attacker and produce this same plot. This would motivate the defense more. |
ICLR | Title
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Abstract
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker’s error rate up to a factor of 85× with minimal impact on the utility for benign users.
1 INTRODUCTION
Effectiveness of state-of-the-art DNN models at a variety of predictive tasks has encouraged their usage in a variety of real-world applications e.g., home assistants, autonomous vehicles, commercial cloud APIs. Models in such applications are valuable intellectual property of their creators, as developing them for commercial use is a product of intense labour and monetary effort. Hence, it is vital to preemptively identify and control threats from an adversarial lens focused at such models. In this work we address model stealing, which involves an adversary attempting to counterfeit the functionality of a target victim ML model by exploiting black-box access (query inputs in, posterior predictions out).
Stealing attacks date back to Lowd & Meek (2005), who addressed reverse-engineering linear spam classification models. Recent literature predominantly focuses on DNNs (specifically CNN image classifiers), and such attacks are shown to be highly effective (Tramèr et al., 2016) on complex models (Orekondy et al., 2019), even without knowledge of the victim’s architecture (Papernot et al., 2017b) nor the training data distribution. The attacks have also been shown to be highly effective at replicating pay-per-query image prediction APIs, for as little as $30 (Orekondy et al., 2019).
Defending against stealing attacks, however, has received little attention and is lacking. Existing defense strategies aim to either detect stealing query patterns (Juuti et al., 2019), or degrade the quality of the predicted posterior via perturbation. Since detection makes strong assumptions on the attacker’s query distribution (e.g., small L2 distances between successive queries), our focus is on the more popular perturbation-based defenses. A common theme among such defenses is accuracy-preserving posterior perturbation: the posterior distribution is manipulated while retaining the top-1 label. For instance, rounding decimals (Tramèr et al., 2016), revealing only high-confidence predictions (Orekondy et al., 2019), and introducing ambiguity at the tail end of the posterior distribution (Lee et al., 2018). Such strategies benefit from preserving the accuracy metric of the defender. However, in line with previous works (Tramèr et al., 2016; Orekondy et al., 2019; Lee et al., 2018), we find models can be effectively stolen using just the top-1 predicted label returned by the black-box. Specifically, in many cases we observe <1% difference between attacks that use the full range of
posteriors (blue line in Fig. 1) to train stolen models and the top-1 label (orange line) alone. In this paper, we work towards effective defenses (red line in Fig. 1) against DNN stealing attacks with minimal impact to defender’s accuracy.
The main insight to our approach is that unlike a benign user, a model stealing attacker additionally uses the predictions to train a replica model. By introducing controlled perturbations to predictions, our approach targets poisoning the training objective (see Fig. 2). Our approach allows for a utility-preserving defense, as well as trading-off a marginal utility cost to significantly degrade attacker’s performance. As a practical benefit, the defense involves a single hyperparameter (perturbation utility budget) and can be used with minimal overhead to any classification model without retraining or modifications.
We rigorously evaluate our approach by defending six victim models, against four recent and effective DNN stealing attack strategies (Papernot et al., 2017b; Juuti et al., 2019; Orekondy et al., 2019). Our defense consistently mitigates all stealing attacks and further shows improvements over multiple baselines. In particular, we find our defense degrades the attacker’s query sample efficiency by 1-2 orders of magnitude. Our approach significantly reduces the attacker’s performance (e.g., 30-53% reduction on MNIST and 13-28% on CUB200) at a marginal cost (1-2%) to the defender’s test accuracy. Furthermore, our approach can achieve the same level of mitigation as baseline defenses, but by introducing significantly less perturbation.
Contributions. (i) We propose the first utility-constrained defense against DNN model stealing attacks; (ii) We present the first active defense which poisons the attacker’s training objective by introducing bounded perturbations; and (iii) Through extensive experiments, we find our approach consistently mitigate various attacks and additionally outperform baselines.
2 RELATED LITERATURE
Model stealing attacks (also referred to as ‘extraction’ or ‘reverse-engineering’) in literature aim to infer hyperparameters (Oh et al., 2018; Wang & Gong, 2018), recover exact parameters (Lowd & Meek, 2005; Tramèr et al., 2016; Milli et al., 2018), or extract the functionality (Correia-Silva et al., 2018; Orekondy et al., 2019) of a target black-box ML model. In some cases, the extracted model information is optionally used to perform evasion attacks (Lowd & Meek, 2005; Nelson et al., 2010; Papernot et al., 2017b). The focus of our work is model functionality stealing, where the attacker’s yardstick is the test-set accuracy of the stolen model. Initial works on stealing simple linear models (Lowd & Meek, 2005) have recently been succeeded by attacks shown to be effective on complex CNNs (Papernot et al., 2017b; Correia-Silva et al., 2018; Orekondy et al., 2019) (see Appendix B for an exhaustive list). In this work, we work towards defenses targeting the latter line of DNN model stealing attacks.
Since ML models are often deployed in untrusted environments, a long line of work exists on guaranteeing certain (often orthogonal) properties to safeguard against malicious users. The properties include security (e.g., robustness towards adversarial evasion attacks (Biggio et al., 2013; Goodfellow et al., 2014; Madry et al., 2018)) and integrity (e.g., running in untrusted environments (Tramer & Boneh, 2019)). To prevent leakage of private attributes (e.g., identities) specific to training data in the resulting ML model, differential privacy (DP) methods (Dwork et al., 2014) introduce randomization during training (Abadi et al., 2016; Papernot et al., 2017a). In contrast, our defense objective is to provide confidentiality and protect the functionality (intellectual property) of the ML model against illicit duplication.
Model stealing defenses are limited. Existing works (which are primarily in multiclass classification settings) aim to either detect stealing attacks (Juuti et al., 2019; Kesarwani et al., 2018; Nelson et al., 2009; Zheng et al., 2019) or perturb the posterior prediction. We focus on the latter since detection involves making strong assumptions on adversarial query patterns. Perturbation-based defenses are predominantly non-randomized and accuracy-preserving (i.e., the top-1 label is unchanged). Approaches include revealing probabilities only of confident classes (Orekondy et al., 2019), rounding probabilities (Tramèr et al., 2016), or introducing ambiguity in posteriors (Lee et al., 2018). None of the existing defenses claim to mitigate model stealing; rather, they only marginally delay the attack by increasing the number of queries. Our work focuses on presenting an effective defense, significantly decreasing the attacker’s query sample efficiency within a principled utility-constrained framework.
3 PRELIMINARIES
Model Functionality Stealing. Model stealing attacks are cast as an interaction between two parties: a victim/defender V (‘teacher’ model) and an attacker A (‘student’ model). The only means of communication between the parties are via black-box queries: the attacker queries inputs x ∈ X and the defender returns a posterior probability distribution y = P(y|x) = FV(x) ∈ ∆K, where ∆K = {y ⪰ 0, 1ᵀy = 1} is the probability simplex over K classes (we use K instead of K − 1 for notational convenience). The attack occurs in two (sometimes overlapping) phases: (i) querying: the attacker uses the black-box as an oracle labeler on a set of inputs to construct a ‘transfer set’ of input-prediction pairs Dtransfer = {(xi, yi)}_{i=1}^B; and (ii) training: the attacker trains a model FA to minimize the empirical risk on Dtransfer. The end-goal of the attacker is to maximize accuracy on a held-out test-set (considered the same as that of the victim for evaluation purposes).
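To make the two-phase interaction concrete, the following is a minimal PyTorch sketch of the pipeline described above; victim, attacker, query_loader, and the optimizer are placeholder objects for the defender's black-box, the stolen model, and the attacker's query distribution, and are our illustrative assumptions rather than part of the original protocol.

import torch

def steal(victim, attacker, query_loader, optimizer, epochs=10):
    # Phase (i): querying -- build the transfer set D_transfer = {(x_i, y_i)}.
    transfer_set = []
    with torch.no_grad():
        for x, _ in query_loader:          # labels (if any) are ignored
            y = victim(x).softmax(dim=1)   # black-box returns posteriors
            transfer_set.append((x, y))

    # Phase (ii): training -- empirical risk minimization on D_transfer.
    for _ in range(epochs):
        for x, y in transfer_set:
            optimizer.zero_grad()
            log_p = attacker(x).log_softmax(dim=1)
            loss = -(y * log_p).sum(dim=1).mean()  # CE against soft labels
            loss.backward()
            optimizer.step()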
Knowledge-limited Attacker. In model stealing, attackers justifiably lack complete knowledge of the victim model FV. Of specific interest are the model architecture and the input data distribution PV(X) used to train the victim model, which are not known to the attacker. Since prior work (Hinton et al., 2015; Papernot et al., 2016; Orekondy et al., 2019) indicates functionality largely transfers across architecture choices, we now focus on the query data used by the attacker. Existing attacks can be broadly categorized based on the inputs {x ∼ PA(X)} used to query the black-box: (a) independent distribution: (Tramèr et al., 2016; Correia-Silva et al., 2018; Orekondy et al., 2019) sample inputs from some distribution (e.g., ImageNet for images, uniform noise) independent of the input data used to train the victim model; and (b) synthetic set: (Papernot et al., 2017b; Juuti et al., 2019) augment a limited set of seed data by adaptively querying perturbations (e.g., using FGSM) of existing inputs. We address both attack categories in our paper.
Defense Objectives. We perturb predictions in a controlled setting: ỹ = FδV(x) = y + δ s.t. ỹ, y ∈ ∆K. The defender has two (seemingly conflicting) objectives: (i) utility: such that perturbed predictions remain useful to a benign user. We consider two utility measures: (a) Acc(FδV, Dtest): accuracy of the defended model on test examples; and (b) dist(y, ỹ) = ∥y − ỹ∥p to measure the perturbation magnitude. (ii) non-replicability: to reduce the test accuracy of an attacker (denoted as Acc(FA, Dtest)) who exploits the predictions to train a replica FA on Dtransfer. For consistency, we evaluate both the defender’s and attacker’s stolen model accuracies on the same set of test examples Dtest.
Defender’s Assumptions. We closely mimic an assumption-free scenario similar to existing perturbation-based defenses. The scenario entails the knowledge-limited defender: (a) unaware whether a query is malicious or benign; (b) lacking prior knowledge of the strategy used by an attacker; and (c) perturbing each prediction independently (hence circumventing Sybil attacks). For added rigor, we also study attacker’s countermeasures to our defense in Section 5.
4 APPROACH: MAXIMIZING ANGULAR DEVIATION BETWEEN GRADIENTS
Motivation: Targeting First-order Approximations. We identify that the attacker eventually optimizes parameters of a stolen model F (·;w) (we drop the subscript ·A for readability) to minimize the loss on training examples {(xi, ỹi)}. Common to a majority of optimization algorithms is estimating the first-order approximation of the empirical loss, by computing the gradient of the loss
w.r.t. the model parameters w ∈ R^D:

u = −∇w L(F(x;w), y)  (1)
Maximizing Angular Deviation (MAD). The core idea of our approach is to perturb the posterior probabilities y which results in an adversarial gradient signal that maximally deviates (see Fig. 2) from the original gradient (Eq. 1). More formally, we add targeted noise to the posteriors which results in a gradient direction:
a = −∇w L(F(x;w), ỹ)  (2)

to maximize the angular deviation between the original and the poisoned gradient signals:

max_a 2(1 − cos∠(a, u)) = max_â ∥â − û∥₂²,  where â = a/∥a∥₂, û = u/∥u∥₂  (3)
Given that the attacker model is trained to match the posterior predictions, such as by minimizing the cross-entropy loss L(y, ỹ) = −∑_k ỹ_k log y_k, we rewrite Equation (2) as:

a = −∇w L(F(x;w), ỹ) = ∇w ∑_k ỹ_k log F(x;w)_k = ∑_k ỹ_k ∇w log F(x;w)_k = Gᵀỹ
where G ∈ RK×D represents the Jacobian over log-likelihood predictions F (x;w) over K classes w.r.t. parameters w ∈ RD. By similarly rewriting Equation (1), substituting them in Equation (3) and including the constraints, we arrive at our poisoning objective (Eq. 4-7) of our approach which we refer to as MAD. We can optionally enforce preserving accuracy of poisoned prediction via constraint (8), which will be discussed shortly.
max_ỹ ∥ Gᵀỹ / ∥Gᵀỹ∥₂ − Gᵀy / ∥Gᵀy∥₂ ∥₂²  (= H(ỹ))  (4)
where G = ∇w log F(x;w)  (G ∈ R^{K×D})  (5)
s.t. ỹ ∈ ∆K  (Simplex constraint)  (6)
dist(y, ỹ) ≤ ϵ  (Utility constraint)  (7)
argmax_k ỹ_k = argmax_k y_k  (For variant MAD-argmax)  (8)
The above presents a black-box optimization challenge for the defense, since the defender justifiably lacks access to the attacker model F (Eq. 5). Apart from addressing this challenge in the next few paragraphs, we also discuss (a) solving a non-standard and non-convex constrained maximization objective; and (b) preserving the accuracy of predictions via constraint (8).
Estimating G. Since we lack access to the adversary's model F, we estimate the Jacobian G = ∇w log Fsur(x;w) (Eq. 5) per input query x using a surrogate model Fsur. We empirically determined (details in Appendix E.1) that the choice of architecture for Fsur is robust to the choice of the adversary's architecture F. However, the initialization of Fsur plays a crucial role, with best results obtained for a fixed, randomly initialized model. We conjecture this occurs because surrogate models with a high loss provide better gradient signals to guide the defender.
Heuristic Solver. Gradient-based strategies to optimize objective (Eq. 4) often lead to poor local maxima. This is in part due to the objective increasing in all directions around the point y (assuming G is full-rank), making the optimization sensitive to initialization. Consequently, we resort to a heuristic to solve for ỹ. Our approach is motivated by Hoffman (1981), who shows that the maximum of a convex function over a compact convex set occurs at the extreme points of the set. Hence, our two-step solver: (i) searches for a maximizer y* of (4) by iterating over the K extremes yk (where yk = 1) of the probability simplex ∆K; and (ii) then computes a perturbed posterior ỹ as a linear interpolation of the original posteriors y and the maximizer y*: ỹ = (1 − α)y + αy*, where α is selected such that the utility constraint (Eq. 7) is satisfied. We further elaborate on the solver and present a pseudocode in Appendix C.
Variant: MAD-argmax. Within our defense formulation, we encode an additional constraint (Eq. 8) to preserve the accuracy of perturbed predictions. The MAD-argmax variant helps us perform accuracy-preserving perturbations similar to prior work. But in contrast, the perturbations are constrained (Eq. 7) and are specifically introduced to maximize the MAD objective. We enforce the accuracy-preserving constraint in our solver by iterating over the extremes of the intersection of the sets in Eq. (6) and (8): ∆K_k = {y ⪰ 0, 1ᵀy = 1, y_k ≥ y_j ∀j ≠ k} ⊆ ∆K.
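As a numerical sanity check of the identity a = Gᵀỹ used in the objective above, the following toy PyTorch snippet compares the cross-entropy gradient with the ỹ-weighted sum of the Jacobian rows; the linear model and the dimensions are illustrative assumptions only.

import torch

torch.manual_seed(0)
K, D_in = 4, 8
model = torch.nn.Linear(D_in, K)               # toy stand-in for F(.; w)
x = torch.randn(1, D_in)
y_tilde = torch.softmax(torch.randn(K), dim=0) # some perturbed posterior

out = model(x).log_softmax(dim=1).squeeze(0)   # log F(x; w), shape (K,)

# Rows of G: gradient of each class log-likelihood w.r.t. the weights.
G = torch.stack([
    torch.autograd.grad(out[k], model.weight, retain_graph=True)[0]
    for k in range(K)
])

# a = -grad_w CE(F(x; w), y_tilde) = grad_w sum_k y_tilde_k log F(x; w)_k
a = torch.autograd.grad((y_tilde * out).sum(), model.weight)[0]

print(torch.allclose(a, (y_tilde[:, None, None] * G).sum(dim=0), atol=1e-6))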
5 EXPERIMENTAL RESULTS
5.1 EXPERIMENTAL SETUP
Victim Models and Datasets. We set up six victim models (see column ‘FV ’ in Table 1), each model trained on a popular image classification dataset. All models are trained using SGD (LR = 0.1) with momentum (0.5) for 30 (LeNet) or 100 epochs (VGG16), with a LR decay of 0.1 performed every 50 epochs. We train and evaluate each victim model on their respective train and test sets.
Attack Strategies. We hope to broadly address all DNN model stealing strategies during our defense evaluation. To achieve this, we consider attacks that vary in query data distributions (independent and synthetic; see Section 3) and strategies (random and adaptive). Specifically, in our experiments we use the following attack models: (i) Jacobian-based Data Augmentation ‘JBDA’ (Papernot et al., 2017b);
(ii,iii) ‘JB-self’ and ‘JB-top3’ (Juuti et al., 2019); and (iv) Knockoff Nets ‘knockoff’ (Orekondy et al., 2019). We follow the default configurations of the attacks where possible. A recap and implementation details of the attack models are available in Appendix D.
In all attack strategies, the adversary trains a model FA to minimize the cross-entropy loss on a transfer set (Dtransfer = {(xi, ỹi)}_{i=1}^B) obtained by using the victim model FV to pseudo-label inputs xi (sampled or adaptively synthesized). By default, we use B=50K queries, which achieves reasonable performance for all attacks and additionally makes defense evaluation tractable. The size of the resulting transfer set (B=50K examples) is comparable (e.g., 1× for CIFAR10/100, 2.1× for Caltech256) to the size of the victim’s training set. In line with prior work (Papernot et al., 2016; Orekondy et al., 2019), we too find (Section 5.2.3) attack and defense performances are unaffected by the choice of architectures, and hence use the victim architecture for the stolen model FA. Due to the complex parameterization of VGG-16 (100M+), we initialize the weights from a pretrained TinyImageNet or ImageNet model (except for the last FC layer, which is trained from scratch). All stolen models are trained using SGD (LR=0.1) with momentum (0.5) for 30 epochs (LeNet) and 100 epochs (VGG16). We find the choices of the attacker’s architecture and optimization do not undermine the defense (discussed in Section 5.2.3).
Effectiveness of Attacks. We evaluate accuracy of resulting stolen models from the attack strategies as-is on the victim’s test set, thereby allowing for a fair head-to-head comparison with the victim model (additional details in Appendix A and D). The stolen model test accuracies, along with undefended victim model FV accuracies are reported in Table 1. We observe for all six victim models, using just 50K black-box queries, attacks are able to significantly extract victim’s functionality e.g., >87% on MNIST. We find the knockoff attack to be the strongest, exhibiting reasonable performance even on complex victim models e.g., 74.6% (0.93×Acc(FV )) on Caltech256.
How Good are Existing Defenses? Most existing defenses in literature (Tramèr et al., 2016; Orekondy et al., 2019; Lee et al., 2018) perform some form of information truncation on the posterior probabilities e.g., rounding, returning top-k labels; all strategies preserve the rank of the most confident label. We now evaluate model stealing attacks on the extreme end of information truncation, wherein the defender returns just the top-1 ‘argmax’ label. This strategy illustrates a rough lower bound on the strength of the attacker when using existing defenses. Specific to knockoff, we observe the attacker is minimally impacted on simpler datasets (e.g., 0.2% accuracy drop on CIFAR10; see Fig. A5 in Appendix). While this has a larger impact on more complex datasets involving numerous classes (e.g., a maximum of 23.4% drop observed on CUB200), the strategy also introduces a significant perturbation (L1=1±0.5) to the posteriors. The results suggest existing defenses, which largely preserve the top-1 label, are largely ineffective at mitigating model stealing attacks.
Defenses: Evaluation. We evaluate all defenses on a non-replicability vs. utility curve at various operating points of the defense. We furthermore evaluate the defenses for a large query budget (50K). We use as non-replicability the accuracy of the stolen model on held-out test data Dtest.
We use two utility metrics: (a) accuracy: test-accuracy of the defended model producing perturbed predictions on Dtest; and (b) perturbation magnitude ϵ: measured as the L1 distance ∥y − ỹ∥1.
Defense: Baselines. We compare our approaches against three methods: (i) reverse-sigmoid (Lee et al., 2018): which softens the posterior distribution and introduces ambiguity among non-argmax probabilities. For this method, we evaluate non-replicability and utility metrics for the defense operating at various choices of their hyperparameter β ∈ [0, 1], while keeping their dataset-specific hyperparameter γ fixed (MNIST: 0.2, FashionMNIST: 0.4, CIFAR10: 0.1, rest: 0.2). (ii) random noise: For controlled random noise, we add uniform random noise δz to the logit prediction scores (z̃ = z + δz, where z = log(y/(1 − y))), enforce utility by projecting δz onto an ϵz-ball (Duchi et al., 2008), and renormalize the probabilities ỹ = 1/(1 + e^{−z̃}). (iii) dp-sgd: while our method and the previous two baselines perturb predictions, we also compare against introducing randomization to the victim model parameters by training with the DP-SGD algorithm (Abadi et al., 2016). DP is a popular technique to protect the model against training data inference attacks. This baseline allows us to verify whether the same protection extends to model functionality.
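For reference, a minimal NumPy sketch of the random-noise baseline follows; the choice of an L2 ball for the projection and the final simplex renormalization are our simplifying assumptions (the cited Duchi et al. (2008) projection concerns L1 balls).

import numpy as np

def random_noise_baseline(y, eps_z, rng=np.random.default_rng(0)):
    y = np.clip(y, 1e-8, 1 - 1e-8)
    z = np.log(y / (1 - y))                  # elementwise logits
    delta = rng.uniform(-1.0, 1.0, size=z.shape)
    norm = np.linalg.norm(delta)
    if norm > eps_z:                         # project onto the eps_z-ball
        delta *= eps_z / norm
    y_tilde = 1.0 / (1.0 + np.exp(-(z + delta)))
    return y_tilde / y_tilde.sum()           # renormalize to the simplex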
5.2 RESULTS
In the following sections, we demonstrate the effectiveness of our defense, rigorously evaluated across a wide range of complex datasets, attack models, defense baselines, query budgets, and utility budgets. For readability, we first evaluate the defense against attack models, proceed to comparing the defense against strong baselines, and then provide an analysis of the defense.
5.2.1 MAD DEFENSE VS. ATTACKS
Figure 3 presents the evaluation of our defenses MAD (Eq. 4-7) and MAD-argmax (Eq. 4-8) against the four attack models. To successfully mitigate attacks as a defender, we want the defense curves (colored solid lines with operating points denoted by thin crosses) to move away from the undefended accuracies (denoted by circular discs, where ϵ = 0.0) towards the ideal defense performance (cyan cross, where Acc(Def.) is unchanged and Acc(Att.) is chance-level). We observe from Figure 3 that by employing an identical defense across all datasets and attacks, the effectiveness of the attacker can be greatly reduced. Across all models, we find MAD provides reasonable operating points (above the diagonal), where the defender achieves significantly higher test accuracies compared to the attacker. For instance, on MNIST, for <1% drop in defender’s accuracy, our defense simultaneously reduces the accuracy of the jbtop3 attacker by 52% (87.3%→35.7%) and knockoff by 29% (99.1%→69.8%). We find similarly promising results even on high-dimensional complex datasets e.g., on CUB200, a 23% (65.1%→41.9%) performance drop of knockoff for a 2% drop in defender’s test performance. Our results indicate effective defenses are achievable, where the defender can trade off a marginal utility cost to drastically impede the attacker.
5.2.2 MAD DEFENSE VS. BASELINE DEFENSES
We now study how our approach compares to baseline defenses, by evaluating the defenses against the knockoff attack (which resulted in the strongest attack in our experiments). From Figure 4, we observe:
Figure 5: Follow-up to Figure 4b (CIFAR10), but with attacker using only the argmax label.
(i) Utility objective = L1 distance (Fig. 4a): Although random-noise and reverse-sigmoid reduce the attacker’s accuracy, these strategies in most cases involve larger perturbations. In contrast, MAD and MAD-argmax provide similar non-replicability (i.e., Acc(Att.)) with significantly less perturbation, especially at lower magnitudes. For instance, on MNIST (first column), MAD (L1 = 0.95) reduces the accuracy of the attacker to under 80% with 0.63× the perturbation of reverse-sigmoid and random-noise (L1 ≈ 1.5). (ii) Utility objective = argmax-preserving (Fig. 4b): By setting a hard constraint on retaining the label of the predictions, we find the accuracy-preserving defenses MAD-argmax and reverse-sigmoid successfully reduce the performance of the attacker by at least 20% across all datasets. In most cases, we find MAD-argmax in addition achieves this objective by introducing less distortion to the predictions compared to reverse-sigmoid. For instance, in Fig. 4a, we find MAD-argmax consistently reduces the attacker accuracy to the same amount at smaller L1 distances. In reverse-sigmoid, we attribute the large L1 perturbations to a shift of the posteriors towards a uniform distribution e.g., the mean entropy of perturbed predictions is 3.02 ± 0.16 (max-entropy = 3.32) at L1=1.0 for MNIST; in contrast, MAD-argmax displays a mean entropy of 1.79 ± 0.11. However, common to accuracy-preserving strategies is the pitfall that the top-1 label is retained. In Figure 5 (see overlapping red and yellow cross-marks), we present the results of training the attacker using only the top-1 label. In line with previous discussions, we find that the attacker is able to significantly recover the original performance of the stolen model for the accuracy-preserving defenses MAD-argmax and reverse-sigmoid.
(iii) Non-replicability vs. utility trade-off (Fig. 4b): We now compare our defense MAD (blue lines) with baselines (rand-noise and dp-sgd) which trade-off utility to mitigate model stealing. Our results indicate MAD offers a better defense (lower attacker accuracies for similar defender accuracies). For instance, to reduce the attacker’s accuracy to <70%, while the defender’s accuracy significantly degrades using dp-sgd (39%) and rand-noise (56.4%), MAD involves a marginal decrease of 1%.
[Figure 8 panels (MNIST): (left) ∥y − ỹ∥₁ ↓ vs. Acc(Attacker) ↓ and (right) Acc(Defender) ↑ vs. Acc(Attacker) ↓; legend: MAD, MAD-argmax, G = I, y* = rand, ideal.]
Figure 8: MAD Ablation experiments. Utility = (left) L1 distance (right) defender test accuracy.
5.2.3 ANALYSIS
How much angular deviation does MAD introduce? To obtain insights on the angular deviation induced between the true and the perturbed gradient, we conduct an experiment by tracking the true gradient direction (which was unknown so far) at each training step. We simulate this by training an attacker model using online SGD (LR=0.001) over N iterations using B distinct images to query and a batch size of 1. At each step t of training, the attacker queries a randomly sampled input xt to the defender model and backpropagates the loss resulting from ỹt. In this particular experiment, the perturbation ỹt is crafted having exact knowledge of the attacker’s parameters. We evaluate the angular deviation between gradients with (a) and without (u) the perturbation.
In Figure 6, we visualize a histogram of deviations: θ = arccos(⟨u, a⟩ / (∥u∥∥a∥)), where u = ∇wL(wt, y, ·) and a = ∇wL(wt, ỹ, ·). We observe: (i) although our perturbation space is severely restricted (a low-dimensional probability simplex), we can introduce surprisingly high deviations (0-115◦) in the high-dimensional parameter space of the VGG16; (ii) for ϵ values at reasonable operating points which preserve the defender’s accuracy within 10% of the undefended accuracy (e.g., ϵ ∈ [0.95, 0.99] for CIFAR10), we see deviations with mean 24.9◦ (yellow bars in Fig. 6). This indicates that the perturbed gradient on average leads to a slower decrease in the loss function; (iii) on the extreme end, with ϵ = ϵmax = 2, on average, we find the perturbations successfully flip (>90◦) the gradient direction, leading to an increase in the test loss, as seen in Figure 7 (blue line). We also find the above observations reasonably transfer to a black-box attacker setting (see Appendix F.4), where the perturbations are crafted without knowledge of the attacker’s parameters. Overall, we find our approach considerably corrupts the attacker’s gradient direction.
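A minimal PyTorch helper for the deviation statistic θ used above is sketched below; loss_fn is assumed to accept soft targets (e.g., a soft-label cross-entropy), which is our assumption for illustration.

import torch

def angular_deviation_deg(model, loss_fn, x, y, y_tilde):
    params = [p for p in model.parameters() if p.requires_grad]
    u = torch.cat([g.flatten() for g in torch.autograd.grad(
        loss_fn(model(x), y), params)])        # clean gradient u
    a = torch.cat([g.flatten() for g in torch.autograd.grad(
        loss_fn(model(x), y_tilde), params)])  # poisoned gradient a
    cos = torch.dot(u, a) / (u.norm() * a.norm())
    return torch.rad2deg(torch.arccos(cos.clamp(-1.0, 1.0)))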
Ablative Analysis. We present an ablation analysis of our approach in Figure 8. In this experiment, we compare our approach MAD and MAD-argmax to: (a) G = I: We substitute the jacobian G (Eq. 5) with a K ×K identity matrix; and (b) y∗=rand: Inner maximization term (Eq. 4) returns a random extreme of the simplex. Note that both (a) and (b) do not use the gradient information to perturb the posteriors.
From Figure 8, we observe: (i) the poor performance of y*=rand, indicating random untargeted perturbations of the posterior probability are a poor strategy; (ii) G = I, where the angular deviation is maximized between the posterior probability vectors, is a slightly better strategy; (iii) MAD outperforms the above approaches. Consequently, we find using the gradient information (although a proxy to the attacker’s gradient signal) within our formulation (Equation 4) is crucial to providing better model stealing defenses.
Subverting the Defense. We now explore various strategies an attacker can use to circumvent the defense. To this end, we evaluate the following strategies: (a) argmax: attacker uses only the most-confident label during training; (b) arch-*: attacker trains other choices of architectures; (c) nquery: attacker queries each image multiple times; (d) nquery+aug: same as (c), but with random cropping and horizontal flipping; and (e) opt-*: attacker uses an adaptive LR optimizer e.g., ADAM (Kingma & Ba, 2014).
We present results over the subversion strategies in Figure 9. We find our defense robust to the above strategies. Our results indicate that the best strategy for the attacker to circumvent our defense
is to discard the probabilities and rely only on the most confident label to train the stolen model. In accuracy-preserving defenses (see Fig. 5), this previously resulted in an adversary entirely circumventing the defense (recovering up to 1.0× original performance). In contrast, we find MAD is nonetheless effective in spite of the strategy, maintaining a 9% absolute accuracy reduction in attacker’s stolen performance.
6 CONCLUSION
In this work, we were motivated by the limited success of existing defenses against DNN model stealing attacks. While prior work is largely based on passive defenses focusing on information truncation, we proposed the first active defense strategy that attacks the adversary's training objective. We found our approach effective in defending a variety of victim models and against various attack strategies. In particular, we find our defense can reduce the accuracy of the adversary by up to 65%, without significantly affecting the defender's accuracy.
Acknowledgement. This research was partially supported by the German Research Foundation (DFG CRC 1223). We thank Paul Swoboda and David Stutz for helpful discussions.
Appendix
A OVERVIEW AND NOTATION
[Figure A1 schematic: the adversary A queries the victim/defender V's trained and defended model to build a transfer set and train a stolen model; annotated quantities correspond to Utility Metric 1, Utility Metric 2, and the Non-Replicability Metric.]
Figure A1: Overview of Attack, Defense, and Evaluation Metrics. We consider an attacker A who exploits black-box access to defended model F δV to train a stolen model FA. In this paper, we take the role of the defender who intends to minimize replicability (i.e., Acc(FA,Dtest)), while maintaining utility of the predictions. We consider two notions of utility: (1) minimizing perturbations in predictions, measured here using L1 distance; and (2) maintaining accuracy of the defended model on test set Acc(F δV ,Dtest). Note that for a fair head-to-head comparison, we use the same held-out test set Dtest to evaluate accuracies of both the defended model F δV and stolen model FA. Similar to all prior work, we assume Dtrain, Dtest are drawn i.i.d. from the same (victim) distribution DV . Notation used in the above figure is further elaborated in Table A1.
General
  x          — Inputs (images ∈ R^{C×H×W})
  y, ỹ       — Original, perturbed posterior predictions
  ∆K         — Probability simplex over K vertices

Attacker A
  PA(X)      — Attacker’s input data distribution
  Dtransfer  — Transfer set (= {(xi, yi)}, where xi ∼ PA(X), yi = FV(xi))
  FA         — Attacker’s (stolen) model trained on Dtransfer

Victim/Defender V
  PV(X)      — Victim’s input data distribution
  Dtrain     — Training data (= {(xi, yi)}, where xi ∼ PV(X))
  FV         — Undefended model trained on Dtrain
  FδV        — Defended model
  Dtest      — Test set (= {(xi, yi)}, where xi ∼ PV(X))

Table A1: Notation
B RELATED WORK: EXTENSION
A summary of existing model stealing attacks and defenses is presented in Table A2.
C DETAILED ALGORITHM
We present a detailed algorithm (see Algorithm 1) for our approach described in Section 4.
The algorithm roughly follows four steps:
(i) Predict (L2): Obtains posterior probability predictions y for input x using a victim model FV (x;wV ).
#  | Work                        | Input         | Query Data     | Adapt.? | Strategy         | P/D? | AP? | AC
1  | Lowd & Meek (2005)          | Linear        | Random Noise   | ✓       | -                | -    | -   | -
2  | Nelson et al. (2009)        | Linear        | Labeled Data   | ✗       | Rejection        | D    | ✗   | 1
3  | Nelson et al. (2010)        | Linear        | Random Noise   | ✓       | -                | -    | -   | -
4  | Alabdulmohsin et al. (2014) | Linear        | Random Noise   | ✓       | Ensembling       | P    | ✗   | 4
5  | Tramèr et al. (2016)        | Linear, NN    | Random Noise   | †       | Rounding         | P    | ✓   | 5
6  | Milli et al. (2018)         | Linear, NN    | Random Noise   | ✓       | -                | -    | -   | -
7  | Kesarwani et al. (2018)     | Decision Tree | -              | -       | Detection        | D    | ✓   | 5
8  | Chandrasekaran et al. (2019)| Linear        | Random Noise   | ✓       | Random Pert.     | P    | ✗   | -
9  | Papernot et al. (2017b)     | CNN           | Synth. Data    | ✓       | -                | -    | -   | -
10 | Correia-Silva et al. (2018) | CNN           | Unlabeled Data | ✗       | -                | -    | -   | -
11 | Pal et al. (2019)           | CNN           | Unlabeled Data | †       | -                | -    | -   | -
12 | Orekondy et al. (2019)      | CNN*          | Unlabeled Data | †       | Rounding, Top-k  | P    | ✓   | 12
13 | Jagielski et al. (2019)     | CNN*          | Unlabeled Data | ✓       | -                | -    | -   | -
14 | Juuti et al. (2019)         | CNN           | Synth. Data    | ✓       | Detection        | D    | ✓   | 9,14
15 | Lee et al. (2018)           | CNN           | -              | -       | Reverse sigmoid  | P    | ✓   | 9
16 | Ours                        | CNN*          | -              | -       | Targeted Pert.   | P    | †   | 9,12,14

Table A2: Existing DNN Attacks and Defenses. Complements the discussion in Section 2. ‘CNN*’: Complex ImageNet-like CNN. ‘†’: Both. ‘✓/✗’: Yes/No. ‘P/D’: Perturbation/Detection. ‘AP’: Accuracy preserving (i.e., maintains top-1 labels of predictions). ‘AC’: Attacks considered.
(ii) Estimate Jacobian G (L3): We estimate an R^{K×D} Jacobian matrix on a surrogate model F. By default, we use as F a randomly initialized model (more details in Appendix E.1). Each row of G represents the gradient direction (in parameter space R^D) of the log-likelihood of class k.
(iii) Maximize MAD Objective (L4): We find the optimal direction y* which maximizes the MAD objective (Eq. 3). To compute the arg max, we iterate over the K extremes of the probability simplex ∆K to find the y* which maximizes the objective. The extreme yk denotes a probability vector with yk = 1.
(iv) Enforce Utility Constraint (L5-7): We enforce the perturbation utility constraint (Eq. 7) by considering a linear interpolation of y∗ and y. The resulting interpolation probability vector ỹ := h(α∗) represents the utility-constrained perturbed prediction that is returned instead of y.
1  Function PerturbedPredict-MAD(x):
     Input: Input data x, model to defend FV(·; wV), proxy attacker model F(·; w)
     Output: Perturbed posterior probability ỹ ∈ ∆K s.t. dist(ỹ, y) ≤ ϵ
2    y := FV(x; wV)                              // Obtain K-dim posteriors
3    G := ∇w log F(x; w)                         // Pre-compute (K × D) Jacobian
4    y* := argmax_{yk ∈ ext(∆K)} ∥ Gᵀyk/∥Gᵀyk∥₂ − Gᵀy/∥Gᵀy∥₂ ∥₂²   // Alternatively ext(∆K_k) for MAD-argmax
5    Define h(α) = (1 − α)y + αy*
6    α* := argmax_{α ∈ [0,1], dist(h(α),y) ≤ ϵ} dist(h(α), y)   // Find optimal step-size via bisection, or OptStep(·) for Lp norms
7    ỹ := h(α*)                                  // Perturbed probabilities
8    return ỹ
9
10 Function OptStep(y, y*, ϵ, p):
11   α* := min{ ϵ / ∥y − y*∥p , 1 }
12   return α*
Algorithm 1: MAD Defense. Supplements the approach in Section 4.
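For concreteness, a compact NumPy translation of Algorithm 1 is sketched below; the utility metric is fixed to the L1 norm, G is assumed precomputed, and restricting the MAD-argmax scan to the single top-1-preserving corner is a simplification of iterating over ext(∆K_k) — these choices are ours, not prescribed by the pseudocode.

import numpy as np

def perturbed_predict_mad(y, G, eps, argmax_preserving=False):
    K = y.shape[0]
    u = G.T @ y
    u = u / np.linalg.norm(u)

    best, y_star = -np.inf, y.copy()
    for k in range(K):                        # extremes e_k of the simplex
        if argmax_preserving and k != y.argmax():
            continue                          # simplified ext(Delta_K^k) scan
        e_k = np.eye(K)[k]
        a = G.T @ e_k
        a = a / np.linalg.norm(a)
        score = np.sum((a - u) ** 2)          # objective H(e_k), Eq. (4)
        if score > best:
            best, y_star = score, e_k

    # OptStep: largest alpha in [0, 1] with ||h(alpha) - y||_1 <= eps.
    alpha = min(eps / max(np.linalg.norm(y - y_star, 1), 1e-12), 1.0)
    return (1 - alpha) * y + alpha * y_star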
D ATTACK MODELS: RECAP AND IMPLEMENTATION DETAILS
Jacobian Based Data Augmentation (jbda) (Papernot et al., 2017b). The motivation of the approach is to obtain a surrogate of the victim black-box classifier, with an end-goal of performing evasion attacks (Biggio et al., 2013; Goodfellow et al., 2014). We restrict discussions primarily to the first part of constructing the surrogate. To obtain the surrogate (the stolen model), the authors depend on an unlabeled ‘seed’ set, typically from the same distribution as that used to train the victim model. As a result, the attacker assumes (mild) knowledge of the input data distribution and the class-label of the victim.
The key idea behind the approach is to query perturbations of inputs, to obtain a reasonable approximation of the decision boundary of the victim model. The attack strategy involves performing the following steps in a repeated manner: (i) images from the substitute set (initially the seed) D is labeled by querying the victim model FV as an oracle labeler; (ii) the surrogate model FA is trained on the substitute dataset; (iii) the substitute set is augmented using perturbations of existing images: Dρ+1 = Dρ ∪ {x+ λρ+1 · sgn(JF [FA(x)]) : x ∈ Dρ}, where J is the jacobian function. We use a seed set of: 100 (MNIST and FashionMNIST), 500 (CIFAR10, CUB200, Caltech256) and 1000 (CIFAR100). We use the default set of hyperparameters of Papernot et al. (2017b) in other respects.
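A minimal PyTorch sketch of one such augmentation round follows; reducing the Jacobian term JF[FA(x)] to the gradient of the top predicted class score is a common implementation choice and an assumption on our part.

import torch

def jbda_augment(attacker, D, lam):
    new_points = []
    for x in D:                                    # x: single input tensor
        x = x.clone().requires_grad_(True)
        scores = attacker(x.unsqueeze(0)).squeeze(0)
        top_score = scores[scores.argmax()]        # predicted-class score
        grad, = torch.autograd.grad(top_score, x)  # row of the Jacobian
        new_points.append((x + lam * grad.sign()).detach())
    return D + new_points                          # D_{rho+1}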
Jacobian Based {self, top-k} (jbself, jbtop3) (Juuti et al., 2019). The authors generalize the above approach by extending the manner in which the synthetic samples are produced. In jb-topk, the Jacobian is calculated w.r.t. the k nearest classes, and in jb-self, w.r.t. the maximum a posteriori class predicted by FA.
Knockoff Nets (knockoff) (Orekondy et al., 2019) . Knockoff is a recent attack model, which demonstrated model stealing can be performed without access to seed samples. Rather, the queries to the black-box involve natural images (which can be unrelated to the training data of the victim model) sampled from a large independent data source e.g., ImageNet1K. Consequently, no knowledge of the input data distribution nor the class-label space of the victim model is required to perform model stealing. The paper proposes two strategies on how to sample images to query: random and adaptive. We use the random strategy in the paper, since adaptive resulted in marginal increases in an open-world setup (which we have).
As the independent data sources in our knockoff attacks, we use: EMNIST-Letters (when stealing MNIST victim model), EMNIST (FashionMNIST), CIFAR100 (CIFAR10), CIFAR10 (CIFAR100), ImageNet1k (CUB200, Caltech256). Overlap between query images and the training data of the victim models are purely co-incidental.
We use the code from the project’s public github repository.
Evaluating Attacks. The resulting replica model FA from all the above attack strategies are evaluated on a held-out test set. We remark that the replica model is evaluated as-is, without additional finetuning or modifications. Similar to prior work, we evaluate the accuracies of FA on the victim’s held-out test set. Evaluating both stolen and the victim model on the same test set allows for fair head-to-head comparison.
E SUPPLEMENTARY ANALYSIS
In this section, we present additional analysis to supplement Section 5.2.3.
E.1 ESTIMATING G
Central to our defense is estimating the Jacobian matrix G = ∇w log F(x;w) (Eq. 5), where F(·;w) is the attacker’s model. However, a defender with black-box attacker knowledge (where F is unknown) requires determining G by instead using a surrogate model Fsur. We determine the choice of Fsur empirically by studying two factors: (a) architecture of Fsur: the choice of the defender’s surrogate architecture is robust to varying attacker architectures (see Fig. A2); and (b) initialization of Fsur: the initialization of the surrogate model parameters plays a crucial role in providing a better defense.
[Figure A2 panel: Acc(Attacker) ↓ vs. Acc(Defender) ↑ on CIFAR10 with Fsur = VGG16; attacker architectures FA ∈ {VGG16, ResNet34, DenseNet}.]
Figure A2: Influence of attacker architecture choices on a fixed surrogate.
[Figure A3 panels: Acc(Attacker) ↓ vs. Acc(Defender) ↑ on MNIST, FashionMNIST, and CIFAR10; legend: rand, early, mid, late, ideal defense.]
Figure A3: Influence of Initialization of a VGG16 Surrogate Model. ‘rand’ = random initialization, (‘early’, ’mid’, ’late’) = ∼(25, 50, 75)% test accuracy of surrogate on test set.
Dataset        Undefended        MAD
MNIST          0.88 ± 14.41      6.47 ± 12.25
FashionMNIST   0.89 ± 15.76      6.65 ± 14.16
CIFAR10        1.93 ± 13.02      8.58 ± 15.02
CIFAR100       2.15 ± 18.82      69.26 ± 21.4
CUBS200        4.45 ± 9.66       446.93 ± 23.87
Caltech256     4.93 ± 21.25      815.97 ± 30.3

Table A3: Run times (in ms). We report the mean and standard deviation of per-prediction run times for undefended and defended models, computed over 10K predictions.
We consider four choices of initialization: {‘rand’, ‘early’, ‘mid’, ‘late’}, which exhibit approximately {chance-level, 25%, 50%, 75%} test accuracies respectively. We observe (see Fig. A3) that a randomly initialized model, which is far from convergence, provides better gradient signals in crafting perturbations.
E.2 RUN-TIME ANALYSIS
We present the run-times of our defended and undefended models in Table A3. The reported numbers were summarized over 10K unique predictions performed on an Nvidia Tesla V100. We find our optimization procedure Eq. (4-7) takes under a second for all models, with at most 0.8s in the case of Caltech256. The primary computational bottleneck of our defense implementation is estimating the matrix G ∈ R^{K×D} in Eq. 5, which currently requires performing K (i.e., number of output classes) backward passes through the surrogate model. Consequently, we find that our inference times on Caltech256 can be further reduced to 0.3s ± 0.04 by using a more efficient surrogate architecture (e.g., ResNet-34).
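The K-backward-pass procedure reads, in a minimal PyTorch sketch (surrogate is any classifier returning logits; this is an illustration under that assumption, not the exact implementation):

import torch

def estimate_G(surrogate, x):
    params = [p for p in surrogate.parameters() if p.requires_grad]
    log_p = surrogate(x).log_softmax(dim=1).squeeze(0)   # shape (K,)
    rows = []
    for k in range(log_p.shape[0]):                      # K backward passes
        grads = torch.autograd.grad(log_p[k], params, retain_graph=True)
        rows.append(torch.cat([g.flatten() for g in grads]))
    return torch.stack(rows)                             # (K, D) matrix G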
F ADDITIONAL PLOTS
F.1 ATTACKER EVALUATION
We present evaluation of all attacks considered in the paper on an undefended model in Figure A4. Furthermore, specific to the knockoff attack, we analyze how training using only the top-1 label (instead of complete posterior information) affects the attacker in Figure A5.
F.2 BUDGET VS. ACCURACY
We plot the budget (i.e., number of distinct black-box attack queries to the defender) vs. the test accuracy of the defender/attacker in Figure A6. The figure supplements Figure 1 and the discussion found in Section 5.2.1 of the main paper.
[Figure A4 panels: # queries (0–50k) vs. Accuracy(Attacker) on MNIST, FashionMNIST, CIFAR10, CIFAR100, CUBS200, and Caltech256; legend: jbda, jbself, jbtop3, knockoff.]
Figure A4: Evaluation of all attacks on undefended victim models.
[Figure A5 panels: # queries (0–50k) vs. Accuracy(Attacker) on MNIST, FashionMNIST, CIFAR10, CIFAR100, CUBS200, and Caltech256; legend: Posteriors, Top-1 label.]
Figure A5: Stolen model trained using knockoff strategy on complete posterior information (y) and only the top-1 label of the posteriors (argmaxk yk).
[Figure A6 panels: Budget (0–50k queries) vs. Test Accuracy for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100 at ϵ ∈ {0.1, 0.5, 0.99, 1.1}; legend: MAD vs. Undefended, each for Defender and Attacker.]
Figure A6: Budget vs. Test Accuracy. Supplements Fig. 3c in the main paper.
[Figure A7 panels: Acc(Attacker) ↓ vs. Acc(Defender) ↑ for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100; legend: MAD, MAD-argmax, random-noise, reverse-sigmoid, ideal.]
Figure A7: Attacker argmax. Supplements Fig. 4 in the main paper.
[Figure A8 panels: angular-deviation histograms (black-box setting) for MNIST, FashionMNIST, and CIFAR10 at ϵ ∈ {0.01, 0.1, 0.5, 1.0, 2.0}, together with the attacker's test loss over training iterations/epochs for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100.]
Figure A8: Histogram of Angular Deviations (Black-box setting). Supplements Fig. 6 in the main paper. The test loss (during training) of the attacker model for each of the histograms (over multiple ϵ values) is provided in the bottom row.
F.3 ATTACKER ARGMAX
In Figure A7, we perform the non-replicability vs. utility evaluation (complementing Fig. 5 in the main paper) under a special situation: the attacker discards the probabilities and only uses the top-1 ‘argmax’ label to train the stolen model. Relevant discussion can be found in Section 5.2.2.
F.4 BLACK-BOX ANGULAR DEVIATIONS
In Figure A8, we provide the angular deviations obtained in a black-box setting over the course of training the attack model. We train the attacker model using the transfer set obtained by the knockoff approach (the strongest attacker in our experiments) for 50 epochs using SGD (lr = 0.01, momentum = 0.5) and a batch size of 64. The experiment complements our previous discussion in Section 5.2.3 of the main paper under “How much angular deviation does MAD introduce?”. As before, we estimate the angular deviations as: θ = arccos(⟨u, a⟩ / (∥u∥∥a∥)), where u = ∇wL(wt, y, ·) and a = ∇wL(wt, ỹ, ·). We observe from Figure A8: (i) the defensive angular deviations introduced by MAD to posterior predictions transfer to a black-box attacker setting, when crafting perturbations without access to the adversary’s model parameters; and (ii) although the setting introduces lower angular deviations at the extreme case of ϵ = 2.0 (e.g., 114.7◦ → 76.5◦ in CIFAR10), we observe the perturbation is sufficient to maximize the attacker’s test loss. We find significant angular deviations introduced by our approach in a black-box setting as well.
F.5 MAD ABLATION EXPERIMENTS
We present the ablation experiments covering all defender models in Figure A9. Relevant discussion is available in Section 5.2.3 of the main paper under “Ablative Analysis”.
[Figure A9 panels: (top) ∥y − ỹ∥₁ ↓ vs. Acc(Attacker) ↓ and (bottom) Acc(Defender) ↑ vs. Acc(Attacker) ↓ for MNIST-EMNISTLetters, FashionMNIST-EMNIST, and CIFAR10-CIFAR100 at B = 50K; legend: MAD, MAD-argmax, MAD-relax, G = I, y* = rand, ideal.]
Figure A9: MAD ablation experiments. Supplements Fig. 8 in the main paper. | 1. What is the purpose of the paper regarding defense against model stealing attacks?
2. How does the proposed approach aim to maintain accuracy while maximizing misleading gradient deviation?
3. Is there any concern regarding the theoretical novelty of the method?
4. Can the proposed approach be applied effectively in adversarial settings?
5. Are there any limitations or potential improvements for the proposed method? | Review | Review
This paper aims at defending against model stealing attacks by perturbing the posterior prediction of a protected DNN, with the balanced goal of maintaining accuracy while maximizing misleading gradient deviation. The maximizing angular deviation formulation makes sense and seems correct. The heuristic solver toward this objective is shown to be relatively effective in the experiments. While the theoretical novelty of the method is limited, the application in adversarial settings may be useful to the advancement of this research field, especially since it is relatively easy for practitioners to apply. I recommend acceptance of this paper, even though I can be convinced otherwise by better field experts.
ICLR | Title
Memorization-Dilation: Modeling Neural Collapse Under Noise
Abstract
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embeddings of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the layer-peeled model, in which the network is assumed to have “infinite expressivity” and can map each data point to any arbitrary representation. In this work we study a more realistic variant of the layer-peeled model, which takes the positivity of the features into account. Furthermore, we extend this model to also incorporate the limited expressivity of the network. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
1 INTRODUCTION
The empirical success of deep neural networks has accelerated the introduction of new learning algorithms and triggered new applications, with a pace that makes it hard to keep up with profound theoretical foundations and insightful explanations. As one of the few yet particularly appealing theoretical characterizations of overparameterized models trained for canonical classification tasks, Neural Collapse (NC) provides a mathematically elegant formalization of learned feature representations Papyan et al. (2020). To explain NC, consider the following setting. Suppose we are given a balanced dataset D = {(x_n^{(k)}, y_n)}_{k∈[K], n∈[N]} ⊂ X × Y in the instance space X = R^d and label space Y = [N] := {1, . . . , N}, i.e. each class n ∈ [N] has exactly K samples x_n^{(1)}, . . . , x_n^{(K)}. We consider network architectures commonly used in classification tasks that are composed of a feature engineering part g : X → R^M (which maps an input signal x ∈ X to its feature representation g(x) ∈ R^M) and a linear classifier W(·) + b given by a weight matrix W ∈ R^{N×M} as well as a bias vector b ∈ R^N. Let w_n denote the row vector of W associated with class n ∈ [N]. During training, both classifier components are simultaneously optimized by minimizing the cross-entropy loss.
*These authors contributed equally to this work.
Denoting the feature representations g(x_n^{(k)}) of the sample x_n^{(k)} by h_n^{(k)}, and the class means and the global mean of the features by

h_n := (1/K) ∑_{k=1}^K h_n^{(k)},    h := (1/N) ∑_{n=1}^N h_n,
NC consists of the following interconnected phenomena (where the limits take place as training progresses):
(NC1) Variability collapse. For each class $n \in [N]$, we have $\frac{1}{K}\sum_{k=1}^{K} \|h_n^{(k)} - \bar h_n\|_2 \to 0$.

(NC2) Convergence to simplex equiangular tight frame (ETF) structure. For any $m, n \in [N]$ with $m \neq n$, we have
$$\|\bar h_n - \bar h\|_2 - \|\bar h_m - \bar h\|_2 \to 0, \quad \text{and} \quad \left\langle \frac{\bar h_n - \bar h}{\|\bar h_n - \bar h\|_2},\ \frac{\bar h_m - \bar h}{\|\bar h_m - \bar h\|_2} \right\rangle \to -\frac{1}{N-1}.$$

(NC3) Convergence to self-duality. For any $n \in [N]$, it holds
$$\frac{\bar h_n - \bar h}{\|\bar h_n - \bar h\|_2} - \frac{w_n}{\|w_n\|_2} \to 0.$$

(NC4) Simplification to nearest class center behavior. For any feature representation $u \in \mathbb{R}^M$, it holds
$$\operatorname*{argmax}_{n\in[N]}\ \langle w_n, u\rangle + b_n \to \operatorname*{argmin}_{n\in[N]}\ \|u - \bar h_n\|_2.$$
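To make the ETF geometry in (NC2) concrete, here is a small NumPy sketch (our own illustration, not part of the original experiments): it builds $N$ unit-norm centered class means from a centered identity matrix and checks that every pairwise cosine equals $-1/(N-1)$; the choice $N = 5$ is arbitrary.

```python
import numpy as np

N = 5                                          # number of classes (illustrative choice)
M = np.eye(N) - np.ones((N, N)) / N            # centered identity: rows span a simplex ETF
means = M / np.linalg.norm(M, axis=1, keepdims=True)  # unit-norm class means

cos = means @ means.T                          # pairwise cosine similarities
off_diag = cos[~np.eye(N, dtype=bool)]
assert np.allclose(off_diag, -1.0 / (N - 1))   # the NC2 limit value
print(off_diag[0])                             # -0.25 for N = 5
```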
In this paper, we consider a well known simplified model, in which the features $h_n^{(k)}$ are not parameterized by the feature engineering network $g$ but are rather free variables. This model is often referred to as the layer-peeled model or unconstrained features model, see e.g. Lu & Steinerberger (2020); Fang et al. (2021); Zhu et al. (2021). However, as opposed to those contributions, in which the features $h_n^{(k)}$ can take any value in $\mathbb{R}^M$, we consider here the case $h_n^{(k)} \geq 0$ (understood componentwise). This is motivated by the fact that features are typically the outcome of some non-negative activation function, like the Rectified Linear Unit (ReLU) or sigmoid. Moreover, by incorporating the limited expressivity of the network into the layer-peeled model, we propose a new model, called memorization-dilation (MD). Given such model assumptions, we formally prove advantageous effects of the so-called label smoothing (LS) technique Szegedy et al. (2015) (training with a modification of the cross-entropy (CE) loss), in terms of generalization performance. This is further confirmed empirically.
2 RELATED WORK
Studying the nature of neural network optimization is challenging. In the past, a plethora of theoretical models has been proposed to do so Sun (2020). These range from analyzing simple linear models Kunin et al. (2019); Zhu et al. (2020); Laurent & von Brecht (2018) to non-linear deep neural networks Saxe et al. (2014); Yun et al. (2018). As one prominent framework among others, Neural Tangent Kernels Jacot et al. (2018); Roberts et al. (2021), where neural networks are considered as linear models on top of randomized features, have been broadly leveraged for studying deep neural networks and their learning properties.
Many of the theoretical properties of deep neural networks in the regime of overparameterization are still unexplained. Nevertheless, certain peculiarities have emerged recently. Among those, so-called “benign overfitting” Bartlett et al. (2019); Li et al. (2021), where deep models are capable of perfectly fitting potentially noisy data while retaining accurate predictions, has recently attracted attention. Memorization has been identified as one significant factor contributing to this effect Arpit et al. (2017); Sanyal et al. (2021), which also relates to our studies. Not less interesting, the learning risk of highly-overparameterized models shows a double-descent behavior when varying the model complexity Nakkiran et al. (2020) as yet another phenomenon. Lastly, the concept of NC Papyan et al. (2020) has recently shed light on symmetries in learned representations of overparameterized models.
After laying the foundation of a rigorous mathematical characterization of the NC phenomenon by Papyan et al. (2020), several follow-up works have broadened the picture. As the former proceeds from studying the CE loss, the collapsing behavior has been investigated for alternative loss functions. For instance, squared losses have shown similar collapsing characteristics Poggio & Liao (2020; 2021), and have paved the way for more opportunities in its mathematical analysis, e.g., by an NC-interpretable decomposition Han et al. (2021). More recently, Kornblith et al. (2021) provide an exhaustive overview of several commonly used loss functions for training deep neural networks regarding their feature collapse behavior.
Besides varying the loss function, different theoretical models have been proposed to analyze NC. Most prominently, unconstrained feature models have been considered, which characterize the penultimate layer activations as free optimization variables Mixon et al. (2020); Lu & Steinerberger (2020); E & Wojtowytsch (2021). This stems from the assumption that highly overparameterized models can approximate any patterns in the feature space. While unconstrained features models typically only look at the last feature encoder layer, layer-peeling allows for “white-boxing” further layers before the last one for a more comprehensive theoretical analysis Fang et al. (2021). Indeed, this approach has been applied in Tirer & Bruna (2022), which namely extends the unconstrained features model by one layer as well as the ReLU nonlinearity. On the other hand, Zhu et al. (2021), Ji et al. (2021) and Zhou et al. (2022a) extend the unconstrained features model analysis by studying the landscape of the loss function therein and the related training dynamics. Beyond unconstrained features models, Ergen & Pilanci (2021) introduce a convex analytical framework to characterize the encoder layers for a more profound understanding of the NC phenomenon. Referring to the implications of NC on our understanding of neural networks, Hui et al. (2022) and Galanti et al. (2021) discuss the impact of NC on test data in the sense of generalization and transfer learning. Finally, Kothapalli et al. (2022) provides a multifaceted survey of recent works related to NC.
3 LAYER-PEELED MODEL WITH POSITIVE FEATURES
As a prerequisite to the MD model, in this section we introduce a slightly modified version of the layer-peeled (or unconstrained features) model (see e.g. Zhu et al. (2021); Fang et al. (2021)), in which the features have to be positive. Accordingly, we will show that the global minimizers of the modified layer-peeled model correspond to an NC configuration, which differs from the global minimizers specified in other works and captures more closely the NC phenomenon in practice.
For conciseness, we denote by $H$ the matrix formed by the features $h_n^{(k)}$, $n \in [N]$, $k \in [K]$ as columns, and define $\|W\|$ and $\|H\|$ to be the Frobenius norms of the respective matrices, i.e. $\|W\|^2 = \sum_{n=1}^{N} \|w_n\|^2$ and $\|H\|^2 = \sum_{k=1}^{K}\sum_{n=1}^{N} \|h_n^{(k)}\|^2$. We consider the regularized version of the model (instead of the norm-constrained one as in e.g. Fang et al. (2021))¹
$$\min_{W,H}\ \mathcal{L}_\alpha(W,H) := L_\alpha(W,H) + \lambda_W \|W\|^2 + \frac{\lambda_H}{K} \|H\|^2 \quad \text{s.t.}\ H \geq 0, \tag{P$_\alpha$}$$
where $\lambda_W, \lambda_H > 0$ are the penalty parameters for the weight decays. By $L_\alpha$ we denote the empirical risk with respect to the LS loss with parameter $\alpha \in [0, 1)$, where $\alpha = 0$ corresponds to the conventional CE loss. More precisely, given a value of $\alpha$, the LS technique defines the label assigned to class $n \in [N]$ as the following probability vector:
$$y_n^{(\alpha)} = (1-\alpha)\, e_n + \frac{\alpha}{N}\, \mathbf{1}_N \in [0,1]^N,$$
where $e_n \in \mathbb{R}^N$ denotes the $n$-th standard basis vector and $\mathbf{1}_N \in \mathbb{R}^N$ denotes the vector consisting of only ones.

¹Note that for simplicity we assume that the last layer does not have bias terms, i.e. $b = 0$. The result can, however, easily be extended to the more general case when the biases do not vanish. Namely, in the presence of bias terms, the statement of Theorem 3.2 and also its proof remain unchanged.

Let $p_W : \mathbb{R}^M \to \mathbb{R}^N$ be the function that assigns to each feature representation $z \in \mathbb{R}^M$ the probability scores of the classes (as a probability vector in $\mathbb{R}^N$),
$$p_W(z) := \operatorname{softmax}(Wz) := \left[\frac{e^{\langle w_m, z\rangle}}{\sum_{i=1}^{N} e^{\langle w_i, z\rangle}}\right]_{m=1}^{N} \in [0,1]^N.$$
Then the LS loss corresponding to a sample in class $n \in [N]$ is given by
$$\ell_\alpha\big(W, z, y_n^{(\alpha)}\big) := \left\langle -y_n^{(\alpha)},\ \log p_W(z) \right\rangle := \sum_{m=1}^{N} -y_{nm}^{(\alpha)} \log\big(p_W(z)_m\big) \tag{1}$$
and the LS empirical risk $L_\alpha$ is defined as
$$L_\alpha(W,H) = \frac{1}{NK} \sum_{k=1}^{K} \sum_{n=1}^{N} \ell_\alpha\big(W, h_n^{(k)}, y_n^{(\alpha)}\big).$$
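For readers who prefer code to formulas, a minimal NumPy sketch of the per-sample loss in Eq. (1) follows; the numerically stable log-softmax and the random $W$ and $z$ in the usage lines are our own illustrative choices.

```python
import numpy as np

def ls_loss(W, z, n, alpha):
    """LS loss l_alpha(W, z, y_n^(alpha)) of Eq. (1); alpha = 0 gives the CE loss.
    W: (N, M) weight matrix, z: (M,) feature vector, n: class index."""
    N = W.shape[0]
    y = (1.0 - alpha) * np.eye(N)[n] + alpha / N      # smoothed target y_n^(alpha)
    logits = W @ z
    # log-softmax via the logsumexp trick for numerical stability
    log_p = logits - (logits.max() + np.log(np.exp(logits - logits.max()).sum()))
    return float(-np.dot(y, log_p))                   # <-y_n^(alpha), log p_W(z)>

rng = np.random.default_rng(0)
W, z = rng.standard_normal((3, 4)), rng.standard_normal(4)
print(ls_loss(W, z, n=0, alpha=0.0), ls_loss(W, z, n=0, alpha=0.1))
```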
We will show that in common settings, the minimizers of (P$_\alpha$) correspond to neural collapse (NC) configurations, which we formalize in Def. 3.1 below.

Definition 3.1 (NC configurations). Let $K, M, N \in \mathbb{N}$, $M \geq N$. A pair $(W, H)$ of a weight matrix formed by rows $w_n \in \mathbb{R}^M$ and a feature matrix formed by columns $h_n^{(k)} \in \mathbb{R}^M_+$ (with $n \in [N]$, $k \in [K]$) is said to be an NC configuration if

(i) The feature representations $h_n^{(k)}$ within every class $n \in [N]$ are equal for all $k \in [K]$, and thus equal to their class mean $h_n := \frac{1}{K}\sum_{k=1}^{K} h_n^{(k)}$.

(ii) The class means $\{h_n\}_{n=1}^{N}$ have equal norms and form an (entry-wise) non-negative orthogonal system.

(iii) Let $P_{\bar h^\perp}$ be the projection onto the subspace of $\mathbb{R}^M$ orthogonal to $\bar h = \frac{1}{N}\sum_{n=1}^{N} h_n$. Then for every $n \in [N]$, it holds $w_n = C\, P_{\bar h^\perp} h_n$ for some constant $C$ independent of $n$.
Our main theorem in this section can be stated as follows.

Theorem 3.2. Let $M \geq N$, $\alpha \in [0, 1)$. Assume that $\frac{N-1}{N}\alpha + 2\sqrt{(N-1)\lambda_W \lambda_H} < 1$. Then any global minimizer of the problem (P$_\alpha$) is an NC configuration.
Note that the NC configurations defined in Definition 3.1 above differ significantly from the ones specified in other works, e.g. Fang et al. (2021); Zhu et al. (2021); Zhou et al. (2022b) or Tirer & Bruna (2022), see Appendix B.1 for more discussion.
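To contrast the orthogonal geometry of Definition 3.1 with the simplex-ETF solutions of the unconstrained model, the following NumPy sketch (our own illustration; the scales $c$ and $C$ and the dimensions are arbitrary) constructs such a configuration explicitly:

```python
import numpy as np

M_dim, N, c, C = 4, 3, 1.0, 2.0            # ambient dim, classes, scales (illustrative)
H_means = c * np.eye(N, M_dim)             # h_n = c e_n: nonnegative, orthogonal, equal norms
g = H_means.mean(axis=0)                   # global mean h-bar
P = np.eye(M_dim) - np.outer(g, g) / (g @ g)   # projection orthogonal to h-bar
W = C * (H_means @ P)                      # rows w_n = C P_{h-perp} h_n, Def. 3.1 (iii)

cos = H_means @ H_means.T / c**2           # pairwise cosines of the class means
print(np.round(cos, 3))                    # identity matrix: orthogonal, not -1/(N-1)
```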
4 THE MEMORIZATION-DILATION MODEL
4.1 EXPERIMENTAL MOTIVATION
Previous studies of the NC phenomenon mainly focus on the collapsing variability of training activations, and make rather cautious statements about its effects on generalization. For instance, Papyan et al. (2020) report slightly improved test accuracies for training beyond zero training error. Going a step further, Zhu et al. (2021) show that the NC phenomenon also happens for overparameterized models when labels are completely randomized. Here, the models seem to memorize by overfitting the data points; however, a rigorous study of how label corruption affects generalization in the regime of NC is still lacking.
To fill the gap, we advocate analyzing the effects of label corruption in the training data on the feature collapse of the (previously unseen) test instances instead of the training instances. Eventually, tight test class clusters go hand in hand with easier separation of the instances and, thus, a smaller generalization error. Following Zhu et al. (2021), we measure the collapse of the penultimate layer activations by the $NC_1$ metric. This metric depicts the relative magnitude of the within-class covariance $\Sigma_W$ with respect to the between-class covariance $\Sigma_B$ of the penultimate layer features and is defined as
$$NC_1 := \frac{1}{N}\operatorname{trace}\big(\Sigma_W \Sigma_B^{\dagger}\big), \tag{2}$$
where
$$\Sigma_W := \frac{1}{NK} \sum_{n=1}^{N} \sum_{k=1}^{K} \big(h_n^{(k)} - \bar h_n\big)\big(h_n^{(k)} - \bar h_n\big)^\top \in \mathbb{R}^{M\times M}, \qquad \Sigma_B := \frac{1}{N} \sum_{n=1}^{N} \big(\bar h_n - \bar h\big)\big(\bar h_n - \bar h\big)^\top \in \mathbb{R}^{M\times M},$$
and $\Sigma_B^{\dagger}$ denotes the pseudo-inverse of $\Sigma_B$. Here, we adopt the notations from Section 1: $h_n^{(k)} \in \mathbb{R}^M$ denotes the feature representation of the $k$-th sample in class $n$, $\bar h_n$ the class mean and $\bar h$ the global mean. Moreover, we distinguish $NC_1^{train}$ and $NC_1^{test}$ to be calculated on the training and test instances, respectively. We call $NC_1^{test}$ dilation. Let us now turn to the notion of memorization, which is not uniquely defined in the deep learning literature. Here, we define memorization in the context of the NC setting and in a global manner, different from other works, e.g. Feldman & Zhang (2020). Formally, suppose that label noise is incorporated by (independently) corrupting the label of each training instance of class $n$ with probability $\eta \in (0,1)$, where corruption means drawing a label uniformly at random from the label space $Y$. We denote the set of corrupted instances by $[\tilde K]$. For a given dataset $D$ (with label noise $\eta$), we define memorization as
$$\mathrm{mem} := \sum_{n=1}^{N} \sum_{k\in[\tilde K]} \big\|h_n^{(k)} - h_n^{*}\big\|_2\,, \tag{3}$$
where $h_n^{*}$ denotes the mean of (unseen) test instances belonging to class $n$.
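A minimal NumPy sketch of the two metrics follows; it is our own illustration (features stored as rows rather than columns), and reading the index $n$ in (3) as the true class of a corrupted instance follows our interpretation of Fig. 3.

```python
import numpy as np

def nc1(H, labels):
    """NC1 metric of Eq. (2). H: (num_samples, M) features,
    labels: (num_samples,) integer classes; a balanced dataset is assumed."""
    classes = np.unique(labels)
    N = len(classes)
    mu = np.stack([H[labels == c].mean(axis=0) for c in classes])   # class means
    g = mu.mean(axis=0)                                             # global mean
    D = H - mu[np.searchsorted(classes, labels)]                    # within-class deviations
    Sigma_W = D.T @ D / len(H)                                      # (1/NK) sum over all samples
    B = mu - g
    Sigma_B = B.T @ B / N
    return float(np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / N)

def mem(H_corrupt, true_labels, test_means):
    """Memorization of Eq. (3): distances of corrupted training features
    to the test mean h*_n of their true class n."""
    return float(sum(np.linalg.norm(h - test_means[n])
                     for h, n in zip(H_corrupt, true_labels)))
```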
We call the original ground truth label of a sample its true label. We call the label after corruption, which may be the true label or not, the observed label. Since instances of the same true label tend to have similar input features in some sense, the network is biased to map them to similar feature representations. Instances are corrupted randomly, and hence, instances of the same true label but different observed labels do not have predictable characteristics that allow the network to separate them in a way that can be generalized. When the network nevertheless succeeds in separating such instances, we say that the network memorized the feature representations of the corrupted instances in the training set. The metric mem in (3) thus measures memorization. The above memorization also affects dilation. Indeed, the network uses the feature engineering part to embed samples with similar input features (that originally came from the same class) to far-apart representations that encode different labels. Such a process degrades the ability of the network to embed samples consistently, and leads to dilation.
To quantify the interaction between mem and $NC_1^{test}$, we analyzed the learned representations $h$ in the penultimate layer feature space for different noise configurations. One may wonder whether one can see a systematic trend in the test collapse given the memorization, and how this evolves across different loss functions.
To this end, we trained simple multi-layer neural networks for two classes ($N = 2$), which we subsampled from the image classification datasets MNIST LeCun et al. (1998), FashionMNIST Xiao et al. (2017), CIFAR-10 Krizhevsky & Hinton (2009) and SVHN Netzer et al. (2011). The labels are corrupted with noise degrees $\eta \in [0.025, 0.4]$. The network consists of 9 hidden layers with 2048 neurons each; thus, it represents a vastly overparameterized model. The feature dimension $M$ is set to the number of classes $N$. We trained these networks using the CE and LS loss with a smoothing factor $\alpha = 0.1$, as well as the mean-squared error (MSE). Moreover, we consider label relaxation (LR) Lienen & Hüllermeier (2021) as a generalization of LS with a relaxation degree $\alpha = 0.1$. The networks were trained until convergence in 200 epochs (where the last 50 epochs did not make any significant changes) using SGD with an initial learning rate of 0.1, multiplied by 0.1 every 40 epochs, and a small weight decay of 0.001. Moreover, we considered ReLU as the activation function throughout the network, as well as batch normalization in each hidden layer. A linear softmax classifier is placed on top of the encoder. We conducted each experiment ten times with different seeds.
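For concreteness, the following PyTorch sketch mirrors the stated setup (9 hidden layers of width 2048 with batch normalization and ReLU, feature dimension $M = N$, SGD with the given schedule, label smoothing $\alpha = 0.1$); the input dimension, the way the feature layer is attached, and the corruption helper are our own simplifications rather than the authors' exact code.

```python
import torch
import torch.nn as nn

N_CLASSES, M, DEPTH, WIDTH, D_IN = 2, 2, 9, 2048, 784  # D_IN: flattened input (dataset-dependent)

def make_model():
    layers, d = [], D_IN
    for _ in range(DEPTH):
        layers += [nn.Linear(d, WIDTH), nn.BatchNorm1d(WIDTH), nn.ReLU()]
        d = WIDTH
    layers += [nn.Linear(d, M)]            # M-dimensional feature layer (M = N here)
    encoder = nn.Sequential(*layers)
    head = nn.Linear(M, N_CLASSES)         # linear softmax classifier on top
    return nn.Sequential(encoder, head)

def corrupt(labels, eta, num_classes=N_CLASSES):
    """With probability eta, replace a label by one drawn uniformly from the label space."""
    flip = torch.rand(len(labels)) < eta
    labels = labels.clone()
    labels[flip] = torch.randint(num_classes, (int(flip.sum()),))
    return labels

model = make_model()
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=40, gamma=0.1)
loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)   # 0.1 for LS, 0.0 for plain CE
# The 200-epoch loop (forward pass, loss_fn, backward, opt.step(); sched.step() per epoch) is omitted.
```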
The results for the above experimental setting are shown in Fig. 1, in which one can observe the trends of $\sqrt{NC_1^{test}}$ per memorization for various configurations. As can be seen, the figure shows an approximately linear correspondence between $\sqrt{NC_1^{test}}$ and mem for the CE derivatives (CE and LS) on all datasets when mem is not large.
Figure 1: Feature collapse of the test instances in terms of $\sqrt{NC_1^{test}}$ (dilation) per memorization $(NK)^{-1}\sum_{n\in[N]}\sum_{k\in[K]} \|h_n^{(k)} - h_n^{*}\|_2$ (top row) and the resulting test accuracies (bottom row), averaged over ten random seeds, for MNIST, FashionMNIST, CIFAR-10 and SVHN ($N = 2$; losses CE, LS, LR and MSE; marker color encodes the noise degree). Comparing the markers of the same color, it can be observed that LS consistently performs better than CE across all datasets, with very few exceptions (the very low noise degrees in CIFAR-10).
Moreover, as CE and LS share the same slope, these results suggest that the degradation of the test collapse (aka dilation) is a function of memorization and the network expressivity, and not of the choice of the loss. The loss only affects how the noise translates to memorization, but not how memorization translates to dilation. Even though the same amount of noise is mapped to different memorization values in CE and LS, the memorization-dilation curve is nevertheless shared between CE and LS. Hence, since LS leads the network to memorize less, it results in improved performance (cf. Fig. 1). We can further see that MSE and LR show a different memorization-dilation correspondence, which means that these losses affect the inductive bias in a different way than CE and LS.
We repeated the experiments for different values of the feature dimension $M$ and show example results in Fig. 2. Here, one can see similar trends of dilation per memorization as before. In the appendix, we provide additional results showing the behavior in the multi-class case $N > 2$ with different models for label noise. The results support our MD model, and show that the memorization-dilation curve is roughly independent of the noise model for low-to-mid noise levels.
4.2 THE MEMORIZATION-DILATION MODEL
Motivated by the observations of the previous experiments, we propose the so-called memorization-dilation (MD) model, which extends the unconstrained features model by incorporating the interaction between memorization and dilation as a model assumption. By this, we explicitly capture the limited expressivity of the network, thereby modeling the inductive bias of the underlying model.
This model shall provide a basis to mathematically characterize the difference in the learning behavior of CE and LS. More specifically, we would like to know why LS shows improved generalization performance over conventional CE, as was observed in past works Müller et al. (2019). The main idea can be explained as follows. We first note that dilation is directly linked to generalization (see also Kornblith et al. (2021)), since the more concentrated the feature representations of each class are, the easier it is to separate the different classes with a linear classifier without having outliers crossing the decision boundary. The MD model asserts that dilation is a linear function of memorization. Hence, the only way that LS can lead to less dilation than CE is if LS memorizes less than CE. The goal in our analysis is therefore to show that, under the MD model, LS indeed leads to less memorization than CE. Note that this behavior is observed empirically in the experiments of Section 4.1.
Next we define the MD model in the binary classification setting.

Definition 4.1. We call the following minimization problem MD. Minimize the MD risk
$$R_{\lambda,\eta,\alpha}(U, r) := F_{\lambda,\alpha}(W, H, r) + \eta\, G_{\lambda,\alpha}(W, U, r),$$
with respect to the noisy feature embedding $U = [u_1, u_2] \in \mathbb{R}^{2\times M}_+$ and the standard deviation $r \geq 0$, under the constraints
$$\eta\, \|h_1 - u_2\| \leq \frac{C_{MD}\, r}{\|h_1 - h_2\|} \tag{4}$$
$$\eta\, \|h_2 - u_1\| \leq \frac{C_{MD}\, r}{\|h_1 - h_2\|}. \tag{5}$$
Here,

• $H \in \mathbb{R}^{2\times M}_+$ and $W \in \mathbb{R}^{M\times 2}$ form an NC configuration (see Definition 3.1).

• $C_{MD} > 0$ is called the memorization-dilation slope, $0 \leq \alpha < 1$ is called the LS parameter, $\eta > 0$ the noise level, and $\lambda > 0$ the regularization parameter.

• $F_{\lambda,\alpha}$ is the component in the (regularized) risk that is associated with the correctly labeled samples,
$$F_{\lambda,\alpha}(W, H, r) := \int \Big( \ell_\alpha\big(W, h_1 + v, y_1^{(\alpha)}\big) + \lambda\, \|h_1 + v\|^2 \Big)\, d\mu_r^1(v) + \int \Big( \ell_\alpha\big(W, h_2 + v, y_2^{(\alpha)}\big) + \lambda\, \|h_2 + v\|^2 \Big)\, d\mu_r^2(v),$$
where $\mu_r^1$ and $\mu_r^2$ are some probability distributions with mean $0$ and standard deviation $r$, and $\ell_\alpha$ is the LS loss defined in (1).

• $G_{\lambda,\alpha}$ is the component in the (regularized) risk that is associated with the corrupted samples, defined as
$$G_{\lambda,\alpha}(W, U, r) = \ell_\alpha\big(W, u_1, y_1^{(\alpha)}\big) + \ell_\alpha\big(W, u_2, y_2^{(\alpha)}\big) + \lambda\, \|u_1\|^2 + \lambda\, \|u_2\|^2.$$
The MD model can be interpreted as follows. First, we consider the feature representations of the correctly labeled samples in each class as samples from a distribution (namely $\mu_r^{1,2}$ in Def. 4.1) with standard deviation $r$, a parameter that measures the dilation of the class cluster. In a natural way, the corresponding risk $F_{\lambda,\alpha}$ involves the loss average over all samples, i.e. the loss integral over the distribution. For simplicity, we assume that the class centers $h_1, h_2$ as well as the weight matrix $W$ are fixed as described by the NC configuration. This is a reasonable simplification as it has always been observed in the experiments.
On the other hand, the feature representations of the corrupted samples are $u_1$ and $u_2$.² The amount of memorization in the first class is defined to be $\eta\|h_2 - u_1\|$, since the more noise $\eta$ there is, the more

²Certainly one can, instead of two single points $u_1$ and $u_2$, consider two distributions centered around $u_1$ and $u_2$, similarly as before for the uncorrupted samples. However, it is quite straightforward to see that the minimization of the MD risk over the dilation of these two distributions is independent of the other variables (unlike $r$), and thus the minimum should be attained in the case of collapsing into two single points. Thus, for convenience we assume directly here that $G_{\lambda,\alpha}$ involves only two single points.
examples we need to memorize. The amount of memorization in the second class is defined the same way. The (normalized) dilation is defined to be $\frac{r}{\|h_1 - h_2\|}$, which models a quantity similar to (2).
Figure 3: Illustration of the MD model. The $h_2^{(k)}$ are test images correctly labeled as 1, with centroid $h_2$. The centroid of the test images with correct label 0 is $h_1$. The centroid of training images which were originally labeled as 1 but are mislabeled as 0 is $u_1$. The memorization of $u_1$ moves it close to $h_1$, and causes dilation of the instances $h_2^{(k)}$.
The constraints (4) and (5) tell us that in order to map noisy samples $u_1$ away from $h_2$, we have to pay with dilation $r$. The larger $r$ is, the further away we can map $u_1$ from $h_2$. The correspondence between memorization and dilation is linear with slope $C_{MD}$ by assumption. There are two main forces in the optimization problem: $u_1$ would like to be as close as possible to its optimal position $h_1$, and similarly $u_2$ likes to be close to $h_2$. In view of the constraints (4) and (5), to achieve this, $r$ has to be increased to $r_{\max} := \frac{\eta\, \|h_1 - h_2\|^2}{C_{MD}}$. On the other hand, the optimal $r$ for the term $F_{\lambda,\alpha}$ is $r = 0$, namely, the layer-peeled NC configuration. An optimal solution hence balances between memorization and dilation. See Fig. 3 for a visualization of the MD model.
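To spell out the value $r_{\max}$ quoted above, a one-line derivation (our own, read off constraint (5)): when the corrupted embedding sits at its preferred position $u_1 = h_1$, the memorization equals $\eta\|h_2 - h_1\|$, so

```latex
% Setting u_1 = h_1 in constraint (5):
\eta\,\|h_2 - h_1\| \;\le\; \frac{C_{MD}\, r}{\|h_1 - h_2\|}
\quad\Longleftrightarrow\quad
r \;\ge\; \frac{\eta\,\|h_1 - h_2\|^{2}}{C_{MD}} \;=:\; r_{\max}.
```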
Our goal in this section is to compare the optimal value of $r$ in the case of the LS and CE losses. We will distinguish between these two cases by setting the value of $\alpha$ in the MD model to $0$ for CE and to some $\alpha_0 > 0$ for LS. This will result in two different scales of the feature embeddings $H$, denoted by $H^{CE}$ and $H^{LS}$ for the CE and LS loss respectively, with the ratio
$$\gamma := \|H^{CE}\| \,/\, \|H^{LS}\| > 1, \tag{6}$$
which holds under the reasonable assumption that the LS technique is sufficiently effective, or more precisely $\alpha_0 > 2\sqrt{\lambda_W \lambda_H}$.
The main result in this section will be Theorem 4.3, which states informally that in the low-noise regime, the optimal dilation in the case of the LS loss is smaller than that in the case of the CE loss. Before presenting this theorem, we first establish several assumptions on the distributions $\mu_r^{1,2}$ and the noise $\eta$ in Assumption 4.2. Basically, we allow a rich class of distributions and only require certain symmetry and bounded supports in terms of $r$, as well as require $\eta$ to be small in terms of the ratio $\gamma$.
Assumption 4.2.
1. Let $\alpha_0 > 0$. We assume that the solution of
$$\min_{W,H}\ \ell_\alpha\big(W, h_1, y_1^{(\alpha)}\big) + \ell_\alpha\big(W, h_2, y_2^{(\alpha)}\big) + \lambda_W \|W\|^2 + \lambda_H \|H\|^2 \quad \text{s.t.}\ H \geq 0$$
is given by $(W, H) = (W^{CE}, H^{CE})$ for $\alpha = 0$ and $(W, H) = (W^{LS}, H^{LS})$ for $\alpha = \alpha_0$.
2. Assume that the distributions $\mu_r^1$ and $\mu_r^2$ are centered, in the sense that
$$\int \langle w_2 - w_1, v\rangle\, d\mu_r^1(v) = \int \langle w_1 - w_2, v\rangle\, d\mu_r^2(v) = 0, \qquad \int \langle h_1, v\rangle\, d\mu_r^1(v) = \int \langle h_2, v\rangle\, d\mu_r^2(v) = 0.$$
Furthermore, we assume that there exists a constant $A > 0$ such that $\|v\| \leq Ar$ for any vector $v$ that lies in the support of $\mu_r^1$ or in the support of $\mu_r^2$.
3. Assume that the noise level $\eta$ and the LS parameter $\alpha_0$ satisfy the following. We suppose $\alpha_0 > 4\sqrt{\lambda_W \lambda_H}$, which guarantees $\gamma := \|H^{CE}\| / \|H^{LS}\| > 1$. We moreover suppose that $\eta$ is sufficiently small to guarantee $\eta^{1/2} < \tilde C\, \big(1 - \frac{1}{\gamma}\big)$, where $\tilde C := \frac{C_{MD}}{\sqrt{2}\, \|h_1^{CE} - h_2^{CE}\|}$.
Now our main result in this section can be formally stated as below.

Theorem 4.3. Suppose that Assumption 4.2 holds true for $M \geq N = 2$ and $\lambda := \lambda_H$. Let $r_*^{CE}$ and $r_*^{LS}$ be the optimal dilations, i.e. the optimum $r$ in the MD problem, corresponding to the CE and LS loss (accordingly $\alpha = 0$ and $\alpha = \alpha_0$), respectively. Then it holds that
$$\frac{r_*^{CE}}{\|h_1^{CE} - h_2^{CE}\|} > \frac{r_*^{LS}}{\|h_1^{LS} - h_2^{LS}\|}.$$
Theorem 4.3 reveals a mechanism by which LS achieves better generalization than CE. It is proven that LS memorizes and dilates less than CE, which is associated with better generalization. Note that in practice, the data often have noise in the sense that not all examples are perfectly labeled. More importantly, examples from different classes may share many similarities, a situation that is also covered by the MD model: the feature representations of samples from those classes are biased toward each other. In this case, LS also leads to decreased dilation which corresponds to better class separation and higher performance Kornblith et al. (2021).
Interestingly, the concurrent work Zhou et al. (2022b) has shown that in the noiseless setting CE and LS lead to largely identical test accuracy, which seems to contradict the claim, made by our work as well as many others, e.g. Kornblith et al. (2021); Müller et al. (2019), that LS performs better. However, note that Zhou et al. (2022b) requires the network to be sufficiently large so that it has enough expressive power to fit the underlying mapping from input data to targets, as well as to be trained until convergence. While the latter is easy to obtain, it is difficult even to check whether the first requirement holds. The difference between the two results is hence possibly caused by the effect of noise and by the network expressivity: while we aim to model the limited expressivity by the MD relation, Zhou et al. (2022b) focuses on networks with approximately infinite expressivity.
The MD model combines a statistical term Fλ,α, that describes the risk over the distribution of feature embeddings of samples with clean labels, and an empirical term ηGλ,α that describes the risk over training samples with noisy labels. One point of view that can motivate such a hybrid statistical-empirical definition is the assumption that the network only memorizes samples of noisy labels, but not samples of clean labels. Such a memorization degrades (dilates) both the collapse of the training and test samples, possibly with different memorization-dilation slopes. However, memorization is not limited to corrupted labels, but can also apply to samples of clean labels Feldman & Zhang (2020), by which the learner can partially negate the dilation of the training features (but not test features). The fact that our model does not take the memorization of clean samples into account is one of its limitations. We believe that future work should focus on modeling the total memorization of all examples. Nevertheless, we believe that our current MD model has merit, since 1) noisy labels are memorized more than clean labels, and especially in the low noise regime the assumption of observing memorization merely for corrupted labels appears reasonable, and 2) our approach and proof techniques can be the basis of more elaborate future MD models.
5 CONCLUSION
In this paper, we first characterized the global minimizers of the Layer-Peeled Model (or the Unconstrained Features Model) with the positivity condition on the feature representations. Our characterization shows some distinctions from the results that have been obtained in recent works for the same model without feature positivity. Besides the conventional cross-entropy (CE) loss, we studied the model in the case of the label smoothing (LS) loss, showing that NC also occurs when applying this technique.
Then we extended the model to the so-called Memorization-Dilation (MD) Model by incorporating the limited expressivity of the network. Using the MD model, which is supported by our experimental observations, we show that when trained with the LS loss, the network memorizes less than when trained by the CE loss. This poses one explanation to the improved generalization performance of the LS technique over the conventional CE loss.
Our model has limitations, however, namely that it is limited to the case of two classes. Motivated by promising results on the applicability of our model to the multi-class setting, we believe that future work should focus on extending the MD model in this respect. With such extensions, memorization-dilation analysis has the potential to underlie a systematic comparison of the generalization capabilities of different losses, such as CE, LS, and label relaxation, by analytically deriving formulas for the amount of memorization associated with each loss.
ACKNOWLEDGMENTS
This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (CRC 901 project no. 160364472). Moreover, the authors gratefully acknowledge the funding of this project by computing time provided by the Paderborn Center for Parallel Computing (PC2).

1. What is the focus of the paper regarding neural collapse phenomena?
2. What are the strengths of the proposed approach, particularly in comparison to other works?
3. Do you have any concerns or questions about the paper's claims and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for future research or improvements to the proposed method?

Summary Of The Paper
The authors study the phenomenon of neural collapse (NC) under several variants of the layer-peeled model. Since features from modern networks are the outcome of some non-negative activation functions, such as ReLU, the paper first considers the case of non-negative features and shows that label smoothing also produces NC solutions in this case. The authors then propose a new memorization-dilation model on test data with randomly label-corrupted training samples, which shows a linear relation between NC degree (dilation) and overall distance between test samples and their corresponding class-mean (memorization). Finally, in the label-corrupted setting, they formally prove the advantage of label smoothing over cross-entropy in binary classification tasks, and show some supporting experiments.
Strengths And Weaknesses
Strength:
This paper formally shows NC solutions for label smoothing and cross-entropy (CE) under nonnegative features. Nonnegative features were only considered for the MSE loss in previous work.
An interesting label-corrupted experiment is proposed, and the linear relation between dilation and memorization when corrupted levels are not large is very interesting.
The proposed memorization-dilation model may provide further insight into the connection between neural collapse and generalization.
Weakness:
As the nonnegative features model has already been studied in Tirer & Bruna (2022), though for the MSE loss, the authors may highlight the technical challenges in extending the results to label smoothing.
This paper shows that on training data with label noise, label smoothing could be more robust to label noise and achieve better performance on the original testing data. Based on this, the paper claims in many places that label smoothing leads to improved generalization in classification tasks. How does this better performance under label noise translate to better generalization in the standard case without label noise?
Following the above point, the recent work [A] shows that label smoothing and CE indeed produce similar performance when the network is sufficiently large and trained sufficiently long in the standard way without label noise. Since label smoothing and CE produce similar NC features under the unconstrained feature models, is the better performance of label smoothing on label noise because it has a different convergence speed than CE? It will be of interest to perform experiments with more iterations and see whether the results are the same.
It is interesting to note that MSE has better testing performance in Figure 1. Could the authors provide some comments on this?
Does the memorization-dilation model only consider one mislabeled training sample per class?
Could the memorization-dilation model be extended to the multi-class case?
[A] Zhou et al., Are All Losses Created Equal: A Neural Collapse Perspective; arXiv preprint arXiv:2210.02192; 2022.
Clarity, Quality, Novelty And Reproducibility
Overall, the paper is well-organized and well-written. The presentation of Section 4.2, particularly Definition 4.1, could be improved. For example, $u_1$ and $u_2$ could be introduced right before Definition 4.1. The results on nonnegative features extend previous work on MSE to label smoothing. The memorization-dilation model is new and could provide further insight into the connection between neural collapse and generalization.
ICLR | Title
Memorization-Dilation: Modeling Neural Collapse Under Noise
Abstract
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embedding of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the layer-peeled model, in which the network is assumed to have “infinite expressivity” and can map each data point to any arbitrary representation. In this work we study a more realistic variant of the layer-peeled model, which takes the positivity of the features into account. Furthermore, we extend this model to also incorporate the limited expressivity of the network. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
N/A
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embedding of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the layer-peeled model, in which the network is assumed to have “infinite expressivity” and can map each data point to any arbitrary representation. In this work we study a more realistic variant of the layer-peeled model, which takes the positivity of the features into account. Furthermore, we extend this model to also incorporate the limited expressivity of the network. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
1 INTRODUCTION
The empirical success of deep neural networks has accelerated the introduction of new learning algorithms and triggered new applications, with a pace that makes it hard to keep up with profound theoretical foundations and insightful explanations. As one of the few yet particularly appealing theoretical characterizations of overparameterized models trained for canonical classification tasks, Neural Collapse (NC) provides a mathematically elegant formalization of learned feature representations Papyan et al. (2020). To explain NC, consider the following setting. Suppose we are given a balanced dataset D ={ (x (k) n , yn) } k∈[K],n∈[N ] ⊂ X × Y in the instance space X = Rd and label space Y = [N ] := {1, . . . , N}, i.e. each class n ∈ [N ] has exactly K samples x(1)n , . . . ,x(K)n . We consider network architectures commonly used in classification tasks that are composed of a feature engineering part g : X → RM (which maps an input signal x ∈ X to its feature representation g(x) ∈ RM ) and a linear classifier W (·) + b given by a weight matrix W ∈ RN×M as well as a bias vector b ∈ RN . Let wn denote the row vector of W associated with class n ∈ [N ]. During training, both classifier components are simultaneously optimized by minimizing the cross-entropy loss.
*These authors contributed equally to this work.
Denoting the feature representations g(x(k)n ) of the sample x (k) n by h (k) n , the class means and the global mean of the features by
hn := 1
K K∑ i=1 h(k)n , h := 1 N N∑ n=1 hn,
NC consists of the following interconnected phenomena (where the limits take place as training progresses):
(NC1) Variability collapse. For each class n ∈ [N ], we have 1K ∑K k=1 ∥∥∥h(k)n − hn∥∥∥2 → 0 . (NC2) Convergence to simplex equiangular tight frame (ETF) structure. For any m,n ∈ [N ]
with m ̸= n, we have
∥hn − h∥2 − ∥hm − h∥2 → 0, and〈 hn − h
∥hn − h∥2 , hm − h ∥hm − h∥2
〉 → − 1
N − 1 .
(NC3) Convergence to self-duality. For any n ∈ [N ], it holds
hn − h ∥hn − h∥2 − wn ∥wn∥2 → 0 .
(NC4) Simplification to nearest class center behavior. For any feature representation u ∈ RM , it holds
argmax n∈[N ] ⟨wn,u⟩+ bn → argmin n∈[N ] ∥u− hn∥2 .
In this paper, we consider a well known simplified model, in which the features h(k)n are not parameterized by the feature engineering network g but are rather free variables. This model is often referred to as layer-peeled model or unconstrained features model, see e.g. Lu & Steinerberger (2020); Fang et al. (2021); Zhu et al. (2021). However, as opposed to those contributions, in which the features h(k)n can take any value in RM , we consider here the case h(k)n ≥ 0 (understood componentwise). This is motivated by the fact that features are typically the outcome of some non-negative activation function, like the Rectified Linear Unit (ReLU) or sigmoid. Moreover, by incorporating the limited expressivity of the network to the layer-peeled model, we propose a new model, called memorization-dilation (MD). Given such model assumptions, we formally prove advantageous effects of the so-called label smoothing (LS) technique Szegedy et al. (2015) (training with a modification of cross-entropy (CE) loss), in terms of generalization performance. This is further confirmed empirically.
2 RELATED WORK
Studying the nature of neural network optimization is challenging. In the past, a plethora of theoretical models has been proposed to do so Sun (2020). These range from analyzing simple linear Kunin et al. (2019); Zhu et al. (2020); Laurent & von Brecht (2018) to non-linear deep neural networks Saxe et al. (2014); Yun et al. (2018). As one prominent framework among others, Neural Tangent Kernels Jacot et al. (2018); Roberts et al. (2021), where neural networks are considered as linear models on top of randomized features, have been broadly leveraged for studying deep neural networks and their learning properties.
Many of the theoretical properties of deep neural networks in the regime of overparameterization are still unexplained. Nevertheless, certain peculiarities have emerged recently. Among those, socalled “benign overfitting” Bartlett et al. (2019); Li et al. (2021), where deep models are capable of perfectly fitting potentially noisy data by retaining accurate predictions, has recently attracted attention. Memorization has been identified as one significant factor contributing to this effect Arpit et al. (2017); Sanyal et al. (2021), which also relates to our studies. Not less interesting, the learning risk of highly-overparameterized models shows a double-descent behavior when varying the model
complexity Nakkiran et al. (2020) as yet another phenomenon. Lastly, the concept of NC Papyan et al. (2020) has recently shed light on symmetries in learned representations of overparameterized models.
After laying the foundation of a rigorous mathematical characterization of the NC phenomenon by Papyan et al. (2020), several follow-up works have broadened the picture. As the former proceeds from studying CE loss, the collapsing behavior has been investigated for alternative loss functions. For instance, squared losses have shown similar collapsing characteristics Poggio & Liao (2020; 2021), and have paved the way for more opportunities in its mathematical analysis, e.g., by an NC-interpretable decomposition Han et al. (2021). More recently, Kornblith et al. (2021) provide an exhaustive overview over several commonly used loss functions for training deep neural networks regarding their feature collapses.
Besides varying the loss function, different theoretical models have been proposed to analyze NC. Most prominently, unconstrained feature models have been considered, which characterize the penultimate layer activations as free optimization variables Mixon et al. (2020); Lu & Steinerberger (2020); E & Wojtowytsch (2021). This stems from the assumption that highly overparameterized models can approximate any patterns in the feature space. While unconstrained features models typically only look at the last feature encoder layer, layer-peeling allows for “white-boxing” further layers before the last one for a more comprehensive theoretical analysis Fang et al. (2021). Indeed, this approach has been applied in Tirer & Bruna (2022), which namely extends the unconstrained features model by one layer as well as the ReLU nonlinearity. On the other hand, Zhu et al. (2021), Ji et al. (2021) and Zhou et al. (2022a) extend the unconstrained features model analysis by studying the landscape of the loss function therein and the related training dynamics. Beyond unconstrained features models, Ergen & Pilanci (2021) introduce a convex analytical framework to characterize the encoder layers for a more profound understanding of the NC phenomenon. Referring to the implications of NC on our understanding of neural networks, Hui et al. (2022) and Galanti et al. (2021) discuss the impact of NC on test data in the sense of generalization and transfer learning. Finally, Kothapalli et al. (2022) provides a multifaceted survey of recent works related to NC.
3 LAYER-PEELED MODEL WITH POSITIVE FEATURES
As a prerequisite to the MD model, in this section we introduce a slightly modified version of the layer-peeled (or unconstrained features) model (see e.g. Zhu et al. (2021); Fang et al. (2021)), in which the features have to be positive. Accordingly, we will show that the global minimizers of the modified layer-peeled model correspond to an NC configuration, which differs from the global minimizers specified in other works and captures more closely the NC phenomenon in practice.
For conciseness, we denote by H the matrix formed by the features h(k)n , n ∈ [N ], k ∈ [K] as columns, and define ∥W ∥ and ∥H∥ to be the Frobenius norm of the respective matrices, i.e. ∥W ∥2 = ∑N n=1 ∥wn∥ 2 and ∥H∥2 = ∑K k=1 ∑N n=1 ∥∥∥h(k)n ∥∥∥2. We consider the regularized version of the model (instead of the norm constraint one as in e.g. Fang et al. (2021)) 1
min W ,H Lα(W ,H) := Lα(W ,H) + λW ∥W ∥2 + λH K ∥H∥2
s.t. H ≥ 0, (Pα)
where λW , λH > 0 are the penalty parameters for the weight decays. By Lα we denote empirical risk with respect to the LS loss with parameter α ∈ [0, 1), where α = 0 corresponds to the conventional CE loss. More precisely, given a value of α, the LS technique then defines the label assigned to class n ∈ [N ] as the following probability vector:
y(α)n = (1− α)en + α
n 1N ∈ [0, 1]N ,
where en ∈ RN denotes the n-th standard basis vector and 1N ∈ RN denotes the vector consisting of only ones. Let p : RM → RN be the function that assigns to each feature representation z ∈ RM
1Note that for simplicity we assume that the last layer does not have bias terms, i.e. b = 0. The result can be however easily extended to the more general case when the biases do not vanish. Namely, in presence of bias terms, the statement of Theorem 3.2 and also its proof remain unchanged.
the probability scores of the classes (as a probability vector in RN ),
pW (z) := softmax(Wz) := [ e⟨wm,z⟩∑N i=1 e ⟨wi,z⟩ ]N m=1 ∈ [0, 1]N .
Then the LS loss corresponding to a sample in class n ∈ [N ] is given by
ℓα(W , z,y (α) n ) := 〈 −y(α)n , log pW (z) 〉 := N∑ m=1 −y(α)nm log ( pW (z)m ) (1)
and the LS empirical risk Lα is defined as
Lα(W ,H) = 1
NK K∑ k=1 N∑ n=1 ℓα ( W ,h(k)n ,y (α) n ) .
We will show that in common settings, the minimizers of (Pα ) correspond to neural collapse (NC) configurations, which we formalize in Def. 3.1 below. Definition 3.1 (NC configurations). Let K,M,N ∈ N, M ≥ N . A pair (W ,H) of a weight matrix formed by rows wn ∈ RM and a feature matrix formed by columns h(k)n ∈ RM+ (with n ∈ [N ], k ∈ [K]) is said to be a NC configuration if
(i) The feature representations h(k)n within every class n ∈ [N ] are equal for all k ∈ [K], and thus equal to their class mean hn := 1K ∑K k=1 h (k) n .
(ii) The class means {hn}Nn=1 have equal norms and form an (entry-wise) non-negative orthogonal system.
(iii) Let P h⊥ be the projection upon the subspace of RM orthogonal to h = 1N ∑N
n=1 hn. Then for every n ∈ [N ], it holds wn = CPh⊥hn for some constant C independent of n.
Our main theorem in this section can be represented as follows. Theorem 3.2. Let M ≥ N , α ∈ [0, 1). Assume that N−1N α + 2 √ (N − 1)λWλH < 1. Then any global minimizer of the problem (Pα) is a NC configuration.
Note that the NC configurations defined in Definition 3.1 above differ significantly from the ones specified in other works, e.g. Fang et al. (2021); Zhu et al. (2021); Zhou et al. (2022b) or Tirer & Bruna (2022), see Appendix B.1 for more discussion.
4 THE MEMORIZATION-DILATION MODEL
4.1 EXPERIMENTAL MOTIVATION
Previous studies of the NC phenomenon mainly focus on the collapsing variability of training activations, and make rather cautious statements about its effects on generalization. For instance, Papyan et al. (2020) report slightly improved test accuracies for training beyond zero training error. Going a step further, Zhu et al. (2021) show that the NC phenomenon also happens for overparameterized models when labels are completely randomized. Here, the models seem to memorize by overfitting the data points, however, a rigorous study how label corruption affects generalization in the regime of NC is still lacking.
To fill the gap, we advocate to analyze the effects of label corruption in the training data on the (previously unseen) test instead of the training feature collapse. Eventually, tight test class clusters go hand in hand with easier separation of the instances and, thus, a smaller generalization error. Following Zhu et al. (2021), we measure the collapse of the penultimate layer activations by the NC1 metric. This metric depicts the relative magnitude of the within-class covariance ΣW with respect to the between-class covariance ΣB of the penultimate layer features and is defined as
NC1 := 1
N trace(ΣWΣ
† B), (2)
where
ΣW := 1
NK N∑ n=1 K∑ k=1 (h(k)n − hn)(h(k)n − hn)⊤ ∈ RM×M ,
ΣB := 1
N N∑ n=1 (hn − h)(hn − h)⊤ ∈ RM×M ,
and Σ†B denotes the pseudo-inverse of ΣB . Here, we adopt the notations from Section 1: h (k) n ∈ RM denotes the feature representation of k-th sample in class n, hn the class mean and h the global mean. Moreover, we distinguish NC train1 and NC test 1 to be calculated on the training and test instances, respectively. We call NC test1 dilation. Let us now turn to the notion of memorization, which is not uniquely defined in deep learning literature. Here, we define memorization in the context of the NC setting and in a global manner, different from other works, e.g. Feldman & Zhang (2020). Formally, suppose that label noise is incorporated by (independently) corrupting the instance of each class label n in the training data with probability η ∈ (0, 1), where corruption means drawing a label uniformly at random from the label space Y . We denote the set of corrupted instances by [K̃]. For a given dataset D (with label noise η), we define memorization as
mem := N∑
n=1 ∑ k∈[K̃] ∥h(k)n − h∗n∥2 , (3)
where h∗n denotes the mean of (unseen) test instances belonging to class n.
We call the original ground truth label of a sample its true label. We call the label after corruption, which may be the true label or not, the observed label. Since instances of the same true label tend to have similar input features in some sense, the network is biased to map them to similar feature representations. Instances are corrupted randomly, and hence, instances of the same true label but different observed labels do not have predictable characteristics that allow the network to separate them in a way that can be generalized. When the network nevertheless succeeds in separating such instances, we say that the network memorized the feature representations of the corrupted instances in the training set. The metric mem in (3) thus measures memorization. The above memorization also affects dilation. Indeed, the network uses the feature engineering part to embed samples of similar features (that originally came from the same class), to far apart features, that encode different labels. Such process degrades the ability of the network to embed samples consistently, and leads to dilation.
To quantify the interaction between mem and NC test1 , we analyzed the learned representations h in the penultimate layer feature space for different noise configurations. One may wonder whether one can see a systematic trend in the test collapse given the memorization, and how this evolves over different loss functions.
To this end, we trained simple multi-layer neural networks for two classes (N = 2), which we subsampled from the image classification datasets MNIST LeCun et al. (1998), FashionMNIST Xiao et al. (2017), CIFAR-10 Krizhevsky & Hinton (2009) and SVHN Netzer et al. (2011). The labels are corrupted with noise degrees η ∈ [0.025, 0.4]. The network consists of 9 hidden layers with 2048 neurons each, thus, it represents a vastly overparameterized model. The feature dimension M is set to the number of classes N . We trained these networks using the CE and LS loss with a smoothing factor α = 0.1, as well as the mean-squared error (MSE). Moreover, we consider label relaxation (LR) Lienen & Hüllermeier (2021) as a generalization to LS with a relaxation degree α = 0.1. The networks were trained until convergence in 200 epochs (where the last 50 epochs did not make any significant changes) using SGD with an initial learning rate of 0.1 multiplied by 0.1 each 40 epochs and a small weight decay of 0.001. Moreover, we considered ReLU as activation function throughout the network, as well as batch normalization in each hidden layer. A linear softmax classifier is composed on the encoder. We conducted each experiment ten times with different seeds.
The results for the above experimental setting are shown in Fig. 1, in which one can observe the trends of √ NC test1 per memorization for various configurations. As can be seen, the figure shows an
approximately linear correspondence between √
NC test1 and mem for the CE derivatives (CE and LS) on all datasets when mem is not large.
0.0 0.5 1.0 1.5 2.0 0.1
0.2
0.3
0.4
0.5
0.6
mnist
CE LS LR MSE
0.0 0.5 1.0 1.5 2.0
0.2
0.3
0.4
0.5
fashionmnist
CE LS LR MSE
0.0 0.5 1.0 1.5 0.45
0.50
0.55
0.60
0.65
0.70
cifar10
CE LS LR MSE
0.0 0.5 1.0 1.5 2.0
0.3
0.4
0.5
0.6
svhn
CE LS LR MSE
0.05
0.10
0.15
0.20
0.25
0.30
0.35
No ise
D eg
re e
Test Collapse per Memorization (N = 2)
(NK) 1 n [N]k [K] |h(k)n h *n |2 (Memorization)
te st 1 (D
ila tio
n)
0.0 0.5 1.0 1.5 2.0
85
90
95
100 mnist
CE LS LR MSE
0.0 0.5 1.0 1.5 2.0 88
90
92
94
96
98
fashionmnist
CE LS LR MSE
0.0 0.5 1.0 1.5
84
86
88
90
cifar10
CE LS LR MSE
0.0 0.5 1.0 1.5 2.0
86
88
90
92
94
96 svhn
CE LS LR MSE
0.05
0.10
0.15
0.20
0.25
0.30
0.35
No ise
D eg
re e
Test Accuracy per Memorization (N = 2)
(NK) 1 n [N]k [K] |h(k)n h *n |2 (Memorization)
Te st
A cc
ur ac
y
Figure 1: Feature collapse of the test instances in terms of
√
NC test1 per memorization (top row) and the resulting
test accuracies (bottom row) averaged over ten random seeds. Comparing the markers of the same color, it can be observed that LS consistently performs better than CE across all datasets, with very few exceptions (the very low noise degrees in cifar10).
Moreover, as CE and LS share the same slope, these results suggest that the degradation of the test collapse (aka dilation) is a function of memorization and the network expressitivity, and not of the choice of the loss. The loss only affects how the noise translates to memorization, but not how memorization translates to dilation. Even though the same amount of noise is mapped to different memorization values in CE and LS, the memorization-dilation curve is nevertheless shared between CE and LS. Hence, since LS leads the network to memorize less, it results in improved performance (cf. Fig. 1). We can further see that MSE and LR show a different memorization-dilation correspondence, which means that these losses affect the inductive bias in a different way than CE and LS.
We repeated the experiments for different values of the feature dimension M and show the example results in Fig. 2. Here, one can see the similar trends of dilation per memorization as before. In the appendix, we provide additional results showing the behavior in the multi-class case N > 2 with different models for label noise. The results support our MD model, and show that the memorizationdilation curve is roughly independent of the noise model for low-to-mid noise levels.
4.2 THE MEMORIZATION-DILATION MODEL
Motivated by the observations of the previous experiments, we propose the so-called memorizationdilation (MD) model, which extends the unconstrained feature model by incorporating the interaction between memorization and dilation as a model assumption. By this, we explicitly capture the limited expressivity of the network, thereby modeling the inductive bias of the underlying model.
This model shall provide a basis to mathematically characterize the difference in the learning behavior of CE and LS. More specifically, we would like to know why LS shows improved generalization performance over conventional CE, as was observed in past works Müller et al. (2019). The main idea can be explained as follows. We first note that dilation is directly linked to generalization (see also Kornblith et al. (2021)), since the more concentrated the feature representations of each class are, the easier it is to separate the different classes with a linear classifier without having outliers crossing the decision boundary. The MD model asserts that dilation is a linear function of memorization. Hence, the only way that LS can lead to less dilation than CE, is if LS memorizes less than CE. Hence, the goal in our analysis is to show that, under the MD model, LS indeed leads to less memorization than CE. Note that this description is observed empirically in the experiments of Section 4.1.
Next we define the MD model in the binary classification setting. Definition 4.1. We call the following minimization problem MD. Minimize the MD risk
Rλ,η,α(U , r) := Fλ,α(W ,H, r) + ηGλ,α(W ,U , r), with respect to the noisy feature embedding U = [u1,u2] ∈ R2×M+ and the standard deviation r ≥ 0, under the constraints
η ∥h1 − u2∥ ≤ CMDr
∥h1 − h2∥ (4)
η ∥h2 − u1∥ ≤ CMDr
∥h1 − h2∥ . (5)
Here,
• H ∈ R2×M+ and W ∈ RM×2 form an NC configuration (see Definition 3.1).
• CMD > 0 is called the memorization-dilation slope, 0 ≤ α < 1 is called the LS parameter, η > 0 the noise level, and λ > 0 the regularization parameter.
• Fλ,α is the component in the (regularized) risk that is associated with the correctly labeled samples,
Fλ,α(W ,H, r) := ∫ ( ℓα ( W ,h1 + v,y (α) 1 ) + λ ∥h1 + v∥2 ) dµ1r(v)
+ ∫ ( ℓα ( W ,h2 + v,y (α) 2 ) + λ ∥h2 + v∥2 ) dµ2r(v)
where µ1r and µ 2 r are some probability distributions with mean 0 and standard deviation r, and lα is the LS loss defined in (1).
• Gλ,α is the component in the (regularized) risk that is associated with the corrupted samples, defined as
Gλ,α(W ,U , r) = ℓα ( W ,u1,y (α) 1 ) + ℓα ( W ,u2,y (α) 2 ) + λ ∥u1∥2 + λ ∥u2∥2 .
The MD model can be interpreted as follows. First we consider the feature representations of the correctly labeled samples in each class as samples from a distribution (namely µ1,2r in Def. 4.1) with standard deviation r, a parameter that measures the dilation of the class cluster. In a natural way, the corresponding risk Fλ,α involves the loss average over all samples, i.e. the loss integral over the distribution. For simplicity, we assume that the class centers h1,h2 as well as the weight matrix W are fixed as described by the NC configuration. This is a reasonable simplification as it has been always observed in the experiments.
On the other hand, the feature representations of corrupted samples are u_1 and u_2.² The amount of memorization in the first class is defined to be η∥h_2 − u_1∥, since the more noise η there is, the more examples we need to memorize. The amount of memorization in the second class is defined in the same way. The (normalized) dilation is defined to be r / ∥h_1 − h_2∥, which models a similar quantity to (2).

²Certainly one can, instead of the two single points u_1 and u_2, consider two distributions centered around u_1 and u_2, similarly as before for the uncorrupted samples. However, it is quite straightforward to see that the minimization of the MD risk over the dilation of these two distributions is independent of the other variables (unlike r), and thus the minimum is attained when they collapse into two single points. For convenience, we therefore assume directly that G_{λ,α} involves only two single points.
Figure 3: Visualization of the MD model. The h_2^{(k)} are test images correctly labeled as 1, with centroid h_2. The centroid of the test images with correct label 0 is h_1. The centroid of training images which were originally labeled as 1 but are mislabeled as 0 is u_1. The memorization of u_1 moves it close to h_1, and causes dilation of the instances h_2^{(k)}.
The constraints (4) and (5) tell us that in order to map noisy samples u_1 away from h_2, we have to pay with dilation r. The larger r is, the further away we can map u_1 from h_2. The correspondence between memorization and dilation is linear with slope C_MD by assumption. There are two main forces in the optimization problem: u_1 would like to be as close as possible to its optimal position h_1, and similarly u_2 would like to be close to h_2. In view of the constraints (4) and (5), to achieve this, r has to be increased to r_max := η∥h_1 − h_2∥² / C_MD (indeed, plugging u_1 = h_1 into (5) gives η∥h_1 − h_2∥ ≤ C_MD r / ∥h_1 − h_2∥). On the other hand, the optimal r for the term F_{λ,α} is r = 0, namely, the layer-peeled NC configuration. An optimal solution hence balances between memorization and dilation. See Fig. 3 for a visualization of the MD model.
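To make this trade-off concrete, the following is a minimal numerical sketch of the MD problem, assuming SciPy's SLSQP solver. All concrete values in it (the NC configuration h_1, h_2, W, the parameters λ, η, C_MD, and the symmetric two-point approximation of µ_r) are illustrative choices of this sketch, not settings taken from the paper.

```python
# Minimal sketch of the MD problem (Definition 4.1) for N = 2, M = 2.
# All constants below are illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.optimize import minimize

M = 2
h1, h2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # NC class means: non-negative, orthogonal, equal norm
W = np.array([[1.0, -1.0], [-1.0, 1.0]])             # rows proportional to P_{hbar-perp} h_n (self-duality)
lam, eta, C_MD = 0.01, 0.05, 1.0
d = np.linalg.norm(h1 - h2)

def ls_loss(z, n, alpha):
    # LS loss (1) for a feature z with observed class n in {0, 1}
    logits = W @ z
    logp = logits - np.log(np.exp(logits).sum())
    y = np.full(2, alpha / 2.0)
    y[n] += 1.0 - alpha                              # y = (1 - alpha) e_n + (alpha / N) 1
    return -(y * logp).sum()

def md_risk(p, alpha):
    u1, u2, r = p[:M], p[M:2 * M], p[-1]
    t = np.array([1.0, -1.0]) / np.sqrt(2.0)         # mu_r approximated by two points +-r*t (mean 0, std r)
    F = sum(0.5 * (ls_loss(h + s * r * t, n, alpha) + lam * np.dot(h + s * r * t, h + s * r * t))
            for h, n in ((h1, 0), (h2, 1)) for s in (1.0, -1.0))
    G = sum(ls_loss(u, n, alpha) + lam * np.dot(u, u) for u, n in ((u1, 0), (u2, 1)))
    return F + eta * G

def solve(alpha):
    cons = [  # constraints (4) and (5): memorization must be paid for with dilation r
        {"type": "ineq", "fun": lambda p: C_MD * p[-1] / d - eta * np.linalg.norm(h1 - p[M:2 * M])},
        {"type": "ineq", "fun": lambda p: C_MD * p[-1] / d - eta * np.linalg.norm(h2 - p[:M])},
    ]
    x0 = np.concatenate([(h1 + h2) / 2, (h1 + h2) / 2, [0.1]])  # feasible start between the class means
    res = minimize(md_risk, x0, args=(alpha,), method="SLSQP",
                   bounds=[(0.0, None)] * (2 * M + 1), constraints=cons)
    return res.x[-1] / d                             # normalized dilation r_* / ||h1 - h2||

print("CE normalized dilation:", solve(alpha=0.0))
print("LS normalized dilation:", solve(alpha=0.1))   # expected to be smaller, cf. Theorem 4.3
```

Note that this sketch fixes a single NC configuration (W, H) for both losses, so it only illustrates how the constraints (4) and (5) trade memorization against dilation; Theorem 4.3 below additionally accounts for the different feature scales H^CE and H^LS.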
Our goal in this section is to compare the optimal value r in case of LS and CE losses. We will distinguish between these two cases by setting the value of α in the MD model to 0 for CE and to some α_0 > 0 for LS. This will result in two different scales of the feature embeddings H, denoted by H^CE and H^LS for CE and LS loss respectively, with the ratio

γ := ∥H^CE∥ / ∥H^LS∥ > 1,  (6)

which holds under the reasonable assumption that the LS technique is sufficiently effective, or more precisely α_0 > 2√(λ_W λ_H).
The main result in this section will be Theorem 4.3, which states informally that in the low noise regime, the optimal dilation in case of LS loss is smaller than that in case of CE loss. Before presenting this theorem, we will first establish several assumptions on the distributions µ_r^1, µ_r^2 and the noise η in Assumption 4.2. Basically, we allow a rich class of distributions and only require certain symmetry and bounded supports in terms of r, as well as require η to be small in terms of the ratio γ.
Assumption 4.2.
1. Let α_0 > 0. We assume that the solution of

min_{W,H} ℓ_α(W, h_1, y_1^{(α)}) + ℓ_α(W, h_2, y_2^{(α)}) + λ_W ∥W∥² + λ_H ∥H∥²  s.t. H ≥ 0

is given by (W, H) = (W^CE, H^CE) for α = 0 and (W, H) = (W^LS, H^LS) for α = α_0.

2. Assume that the distributions µ_r^1 and µ_r^2 are centered, in the sense that

∫ ⟨w_2 − w_1, v⟩ dµ_r^1(v) = ∫ ⟨w_1 − w_2, v⟩ dµ_r^2(v) = 0,
∫ ⟨h_1, v⟩ dµ_r^1(v) = ∫ ⟨h_2, v⟩ dµ_r^2(v) = 0.

Furthermore, we assume that there exists a constant A > 0 such that ∥v∥ ≤ Ar for any vector v that lies in the support of µ_r^1 or in the support of µ_r^2.

3. Assume that the noise level η and the LS parameter α_0 satisfy the following. We suppose α_0 > 4√(λ_W λ_H), which guarantees γ := ∥H^CE∥ / ∥H^LS∥ > 1. We moreover suppose that η is sufficiently small to guarantee η^{1/2} < C̃ (1 − 1/γ), where C̃ := C_MD / (√2 ∥h_1^CE − h_2^CE∥).
Now our main result in this section can be formally stated as follows.
Theorem 4.3. Suppose that Assumption 4.2 holds true for M ≥ N = 2 and λ := λ_H. Let r_*^CE and r_*^LS be the optimal dilations, i.e. the optimum r in the MD problem, corresponding to the CE and LS loss (accordingly α = 0 and α = α_0), respectively. Then it holds that

r_*^CE / ∥h_1^CE − h_2^CE∥ > r_*^LS / ∥h_1^LS − h_2^LS∥.
Theorem 4.3 reveals a mechanism by which LS achieves better generalization than CE. It is proven that LS memorizes and dilates less than CE, which is associated with better generalization. Note that in practice, the data often have noise in the sense that not all examples are perfectly labeled. More importantly, examples from different classes may share many similarities, a situation that is also covered by the MD model: the feature representations of samples from those classes are biased toward each other. In this case, LS also leads to decreased dilation which corresponds to better class separation and higher performance Kornblith et al. (2021).
Interestingly, the concurrent work Zhou et al. (2022b) has shown that in the noiseless setting CE and LS lead to largely identical test accuracy, which seems to contradict the claim, made in our work as well as many others, e.g. Kornblith et al. (2021); Müller et al. (2019), that LS performs better. However, note that Zhou et al. (2022b) requires the network to be sufficiently large so that it has enough expressive power to fit the underlying mapping from input data to targets, as well as to be trained until convergence. While the latter is easy to obtain, it is difficult even to check whether the first requirement holds. The difference between the two results is hence possibly caused by the effect of noise and by the network expressivity: while we aim to model the limited expressivity by the MD relation, Zhou et al. (2022b) focuses on networks with approximately infinite expressivity.
The MD model combines a statistical term Fλ,α, that describes the risk over the distribution of feature embeddings of samples with clean labels, and an empirical term ηGλ,α that describes the risk over training samples with noisy labels. One point of view that can motivate such a hybrid statistical-empirical definition is the assumption that the network only memorizes samples of noisy labels, but not samples of clean labels. Such a memorization degrades (dilates) both the collapse of the training and test samples, possibly with different memorization-dilation slopes. However, memorization is not limited to corrupted labels, but can also apply to samples of clean labels Feldman & Zhang (2020), by which the learner can partially negate the dilation of the training features (but not test features). The fact that our model does not take the memorization of clean samples into account is one of its limitations. We believe that future work should focus on modeling the total memorization of all examples. Nevertheless, we believe that our current MD model has merit, since 1) noisy labels are memorized more than clean labels, and especially in the low noise regime the assumption of observing memorization merely for corrupted labels appears reasonable, and 2) our approach and proof techniques can be the basis of more elaborate future MD models.
5 CONCLUSION
In this paper, we first characterized the global minimizers of the Layer-Peeled Model (or the Unconstrained Features Model) with the positivity condition on the feature representations. Our characterization shows some distinctions from the results that have been obtained in recent works for the same model without feature positivity. Besides the conventional cross-entropy (CE) loss, we studied the model in case of the label smoothing (LS) loss, showing that NC also occurs when applying this technique.
Then we extended the model to the so-called Memorization-Dilation (MD) Model by incorporating the limited expressivity of the network. Using the MD model, which is supported by our experimental observations, we show that when trained with the LS loss, the network memorizes less than when trained by the CE loss. This poses one explanation to the improved generalization performance of the LS technique over the conventional CE loss.
Our model has limitations, however, namely that it is limited to the case of two classes. Motivated by promising results on the applicability of our model to the multi-class setting, we believe that future work should focus on extending the MD model in this respect. With such extensions, memorization-dilation analysis has the potential to underlie a systematic comparison of the generalization capabilities of different losses, such as CE, LS, and label relaxation, by analytically deriving formulas for the amount of memorization associated with each loss.
ACKNOWLEDGMENTS
This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (CRC 901 project no. 160364472). Moreover, the authors gratefully acknowledge the funding of this project by computing time provided by the Paderborn Center for Parallel Computing (PC2).

1. What is the main contribution of the paper regarding label smoothing loss and its solution structure?
2. What are the strengths and weaknesses of the paper's analysis and explanation of the better generalization of LS loss than CE loss under label corruption?
3. Do you have any concerns or confusion regarding the positivity constraint, the memorization-dilation model, and the equation in (NC2)?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content after the rebuttal?

Summary Of The Paper
This study gives the solution structure of the label smoothing (LS) loss under a positivity constraint, which is an orthogonal variant of the original neural collapse configurations. The paper further explores why the LS loss generalizes better than the CE loss when label corruption emerges, based on the memorization-dilation model. It reveals that the LS loss induces less memorization and thus less dilation, which explains its better generalization.
Strengths And Weaknesses
Strengths:
The study gives the solution structure of the LS loss, whereas previous studies mainly focus on the original CE loss. The study also gives an explanation of why the LS loss generalizes better than the CE loss under label corruption.
Weaknesses:
Misleading description. The authors claim to address the model with noisy data. However, they actually only consider label corruption. Noisy data is not equivalent to label noise. A more precise description should be adopted.
The positivity constraint is confusing. Although the last layer will have positive features when ReLU is applied, a neural network does not necessarily end with a ReLU activation in most cases. If an identity connection or a BN layer is appended, we can easily get rid of the positivity constraint. More importantly, I do not see a necessary connection between the positivity constraint and the later analysis of label corruption. Is your result (LS loss shows less memorization and dilation) valid only when the positivity constraint is imposed?
The claim that less dilation leads to better generalization lacks rigorous support. As indicated by your definition, dilation reflects the compactness and separation of test features. But generalization ability seems to be more related to loss and accuracy on the test set. So, a more rigorous relation between “dilation” and “generalization” in your context should be constructed. Otherwise, the claim seems to be groundless.
The memorization-dilation model is confusing. First, it only considers two classes, which is unrealistic. Why do the authors only consider two classes? I do not think it would be a simple extension by generalizing the two-class result into multiple classes. Besides, why are H and W fixed as the optimal solution in the memorization-dilation model? Its motivation and rationality are unclear and need more discussion.
The second equation in (NC2) on page 2 is wrong.
I suggest that the authors consider more cases in Theorem 3.2. It would be better if the authors could first give the solution structure of the LS loss without positivity constraint, and the CE loss with the positivity constraint, and then deal with the LS loss with positivity constraint as stated in Theorem 3.2.
------ After rebuttal
The authors address most of my questions and concerns. I increase my score to 6.
Clarity, Quality, Novelty And Reproducibility
The study focuses on an important problem. The result is somewhat inspiring and interesting. The clarity of this paper needs to be improved, especially for the structure. |
ICLR | Title
Memorization-Dilation: Modeling Neural Collapse Under Noise

1. What is the main contribution of the paper regarding neural collapse?
2. How does the proposed refinement of the layer peeling model align with the properties of commonly used DNNs?
3. Can the authors clarify the definition of NC2 and its relation to the original definition of NC properties?
4. Could the authors provide more motivation behind the dilation quantity NC1 and its intuition regarding NC test data?
5. How does the paper's study of memorization relate to previous work by Feldman and Zhang?
6. Is there a possibility of reconsidering the definition of the corrupted dataset to better reflect memorization?

Summary Of The Paper
This paper studies the phenomenon of neural collapse (NC), where representations of multi-class examples tend to collapse to the mean representation in a structured fashion. The authors argue that the commonly used layer-peeling model for NC is overly simplistic; in particular, the assumption of "infinite expressivity" does not hold well for networks in practice. They argue instead for a refinement of the layer-peeling model that takes the signs of the features into account and limits the expressiveness of the model to represent transformed inputs. They use this model to study how memorization (which they define as the deviation from the expected NC structure of data with injected label noise) and dilation (which they define as deviation of the collapsed within-class mean structure that defines neural collapse) jointly affect the generalization of networks trained with both cross entropy and label smoothing.
Strengths And Weaknesses
Strengths
The authors are attempting to make the study of neural collapse applicable to more real-world models, by refining the layer-peeling model to align with the properties of many commonly used DNNs (e.g non-negativity of features). This is important work needed to understand the training dynamics of modern networks.
Though this paper unavoidably introduces a lot of notation, the authors do a good job in guiding readers through its deployment in service of their arguments.
The authors also explain the properties of NC very clearly, and do a good job of succinctly summarizing the development of tools for the study of neural network optimization. Though I would add that the study on memorization by Feldman and Zhang (NeurIPS 2020) merits mentioning here, as they tackle long-tailed natural data distributions (which, combined with the rigidity implied by NC, means that memorization would be necessary to achieve low training error).
Weaknesses
In section 1, the given definition for NC2 has a typo. I believe it should be that the inner product between any different orthonormal class means (centered at h) approaches −1/(N − 1). As written, it suggests that ⟨x, x⟩ → −1/(N − 1).
In section 3, just after the definition of $\mathcal{P}_{\alpha}$, it looks as if $\mathbf{y}_{n}^{(\alpha)}$ is defined by label smoothing, but it might be good to remind readers here, as there is a lot of notation introduced in this section.
Could the authors clarify element (iii) of definition 3.1? Definition 3.1 by inspection seems to agree with the original definition of NC properties, with the exception of (iii). Perhaps to aid the reader here, each component of Definition 3.1 could be annotated with the NC property that it entails? E.g., component (i) seems to entail NC1.
The first sentence in the last paragraph of section 3 is awkward (and contains a repetition "in in"). In the sentence, do the authors mean that definition 3.1 (which includes condition (iii)) allows for Theorem 3.1 in Tirer & Bruna 2022 to be established? Could the authors please clarify. I read this as suggesting "Definition 3.1 and the orthogonal frame result in Theorem 3.1 in …"
The definition of the dilation quantity NC_1 could use more motivation. Why are we examining the trace of this product? What intuition does this give us about NC test data? Zhu et al. (which the authors cite as providing the definition) explain that NC_1 is intended to measure the within-class variability collapse. I presume that this is because the trace of Σ_W will vanish as NC takes hold, while Σ_B will approach a constant. Perhaps a footnote could help guide the reader in the absence of Zhu et al.'s explanation?
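To make this intuition concrete, the following is a minimal sketch of the NC_1 metric in (2), assuming features are available as a NumPy array; the function is illustrative, not the authors' implementation.

```python
# Minimal sketch of NC_1 = (1/N) trace(Sigma_W Sigma_B^dagger) from equation (2);
# illustrative only, not the authors' code.
import numpy as np

def nc1(features, labels):
    """features: (num_samples, M) penultimate-layer activations; labels: (num_samples,) ints."""
    classes = np.unique(labels)
    N, M = len(classes), features.shape[1]
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    global_mean = class_means.mean(axis=0)
    Sigma_W = np.zeros((M, M))                   # within-class covariance
    for c, mu in zip(classes, class_means):
        diff = features[labels == c] - mu
        Sigma_W += diff.T @ diff
    Sigma_W /= len(features)                     # 1/(NK) for a balanced dataset
    diff = class_means - global_mean             # between-class covariance
    Sigma_B = diff.T @ diff / N
    # As NC takes hold, Sigma_W vanishes while Sigma_B stabilizes, so the trace tends to 0,
    # matching the reviewer's reading of the metric.
    return np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / N
```

Evaluated on held-out samples, this quantity is exactly the dilation NC_1^test that the paper tracks.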
Clarity, Quality, Novelty And Reproducibility
The paper is for the most part written quite clearly. It requires no small effort to make work on the structure of neural network optimization accessible to a wider ML audience. Bravo :)
On the novelty of their work, though Zhou et al (Are All Losses Created Equal: A Neural Collapse Perspective, to appear in NeurIPS 2022) also examine Neural Collapse through the lens of several losses including label smoothing, this is close enough to co-publishing that I'm willing to give the authors credit here. In addition, this work takes a different basis for interrogating the effect on how label noise that perturbs NC structure affects generalization, which to my mind is a new (and worthy) line of inquiry.
I have an extended note about the authors' definition of memorization (defined as equation 3 in section 4.1). This concept of memorization is not unrelated to the concept by Feldman and Zhang. Here, you are creating elements of the long tail of an anti-causal representation by adding label noise. Your approach seems equivalent to drawing a label, then drawing observable features for this label from among the distributions of the data from differing labels, thus creating extremely improbable observations in the tail of the conditional distribution of observable features given the label.
I would ask the authors to reconsider how they define the corrupted dataset, and ask themselves if this measure defined in (3) really measures memorization. I can see an argument where in fact a low value for mem would mean that the network has memorized that the label-corrupted instances should be mapped to h_n^*, since this is the only way to achieve low training error.
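For concreteness, here is a minimal sketch of the measure in (3), assuming NumPy arrays; the function name and signature are hypothetical, not taken from the paper's code. It makes the reviewer's point visible: a small returned value means the corrupted training features sit close to the test centroids h_n^* they are indexed against.

```python
# Minimal sketch of the memorization measure (3); hypothetical helper, not the paper's code.
import numpy as np

def memorization(train_feats, class_idx, corrupted_mask, test_class_means):
    """train_feats: (num_samples, M) training features h_n^(k);
    class_idx: (num_samples,) class index n used in (3) for each sample;
    corrupted_mask: (num_samples,) boolean mask of the corrupted set;
    test_class_means: (N, M) array whose n-th row is the test centroid h_n^*."""
    h = train_feats[corrupted_mask]
    centers = test_class_means[class_idx[corrupted_mask]]
    return np.sum((h - centers) ** 2)            # sum of squared distances ||h_n^(k) - h_n^*||^2
```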
A third consideration is that, if the authors' desire is to measure the effect of label corruption on the displacement of the data point from the class-mean NC structure, it might be more informative to measure the relative distances between $\mathbf{h}_{n}^{*}$ and $\mathbf{h}_{orig}^{*}$, the mean of the true-label instances.
ICLR | Title
Memorization-Dilation: Modeling Neural Collapse Under Noise
Abstract
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embedding of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the layer-peeled model, in which the network is assumed to have “infinite expressivity” and can map each data point to any arbitrary representation. In this work we study a more realistic variant of the layer-peeled model, which takes the positivity of the features into account. Furthermore, we extend this model to also incorporate the limited expressivity of the network. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
N/A
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embedding of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the layer-peeled model, in which the network is assumed to have “infinite expressivity” and can map each data point to any arbitrary representation. In this work we study a more realistic variant of the layer-peeled model, which takes the positivity of the features into account. Furthermore, we extend this model to also incorporate the limited expressivity of the network. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks.
1 INTRODUCTION
The empirical success of deep neural networks has accelerated the introduction of new learning algorithms and triggered new applications, with a pace that makes it hard to keep up with profound theoretical foundations and insightful explanations. As one of the few yet particularly appealing theoretical characterizations of overparameterized models trained for canonical classification tasks, Neural Collapse (NC) provides a mathematically elegant formalization of learned feature representations Papyan et al. (2020). To explain NC, consider the following setting. Suppose we are given a balanced dataset D ={ (x (k) n , yn) } k∈[K],n∈[N ] ⊂ X × Y in the instance space X = Rd and label space Y = [N ] := {1, . . . , N}, i.e. each class n ∈ [N ] has exactly K samples x(1)n , . . . ,x(K)n . We consider network architectures commonly used in classification tasks that are composed of a feature engineering part g : X → RM (which maps an input signal x ∈ X to its feature representation g(x) ∈ RM ) and a linear classifier W (·) + b given by a weight matrix W ∈ RN×M as well as a bias vector b ∈ RN . Let wn denote the row vector of W associated with class n ∈ [N ]. During training, both classifier components are simultaneously optimized by minimizing the cross-entropy loss.
*These authors contributed equally to this work.
Denoting the feature representations g(x(k)n ) of the sample x (k) n by h (k) n , the class means and the global mean of the features by
hn := 1
K K∑ i=1 h(k)n , h := 1 N N∑ n=1 hn,
NC consists of the following interconnected phenomena (where the limits take place as training progresses):
(NC1) Variability collapse. For each class $n \in [N]$, we have $\frac{1}{K}\sum_{k=1}^{K} \big\|h_n^{(k)} - \bar{h}_n\big\|_2 \to 0$.

(NC2) Convergence to simplex equiangular tight frame (ETF) structure. For any $m, n \in [N]$ with $m \neq n$, we have
$$\|\bar{h}_n - \bar{h}\|_2 - \|\bar{h}_m - \bar{h}\|_2 \to 0, \quad \text{and} \quad \left\langle \frac{\bar{h}_n - \bar{h}}{\|\bar{h}_n - \bar{h}\|_2},\ \frac{\bar{h}_m - \bar{h}}{\|\bar{h}_m - \bar{h}\|_2} \right\rangle \to -\frac{1}{N-1}.$$

(NC3) Convergence to self-duality. For any $n \in [N]$, it holds
$$\frac{\bar{h}_n - \bar{h}}{\|\bar{h}_n - \bar{h}\|_2} - \frac{w_n}{\|w_n\|_2} \to 0.$$

(NC4) Simplification to nearest class center behavior. For any feature representation $u \in R^M$, it holds
$$\operatorname*{argmax}_{n\in[N]} \langle w_n, u\rangle + b_n \to \operatorname*{argmin}_{n\in[N]} \|u - \bar{h}_n\|_2.$$
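To make these four criteria concrete, the following NumPy sketch (our illustration, not part of the original paper; the array shapes and function names are our own choices) computes simple proxies for NC1–NC3 from a feature tensor and a weight matrix. NC4 can be checked analogously by comparing the classifier's argmax with the nearest-class-center rule.

```python
import numpy as np

def nc_diagnostics(H, W):
    """H: features, shape (N, K, M); W: classifier weights, shape (N, M).

    Returns proxies for NC1-NC3; they approach 0, -1/(N-1) and 0
    respectively as neural collapse emerges.
    """
    N, K, M = H.shape
    class_means = H.mean(axis=1)                      # bar{h}_n, shape (N, M)
    global_mean = class_means.mean(axis=0)            # bar{h}, shape (M,)

    # (NC1) within-class variability around the class means
    nc1 = np.linalg.norm(H - class_means[:, None, :], axis=-1).mean()

    # (NC2) pairwise cosines of the centered class means -> -1/(N-1)
    centered = class_means - global_mean
    U = centered / np.linalg.norm(centered, axis=1, keepdims=True)
    nc2 = (U @ U.T)[~np.eye(N, dtype=bool)].mean()

    # (NC3) alignment of classifier rows with centered class means -> 0
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    nc3 = np.linalg.norm(U - Wn, axis=1).mean()
    return nc1, nc2, nc3
```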
In this paper, we consider a well-known simplified model, in which the features $h_n^{(k)}$ are not parameterized by the feature engineering network $g$ but are rather free variables. This model is often referred to as the layer-peeled model or unconstrained features model, see e.g. Lu & Steinerberger (2020); Fang et al. (2021); Zhu et al. (2021). However, as opposed to those contributions, in which the features $h_n^{(k)}$ can take any value in $R^M$, we consider here the case $h_n^{(k)} \geq 0$ (understood componentwise). This is motivated by the fact that features are typically the outcome of some non-negative activation function, like the Rectified Linear Unit (ReLU) or sigmoid. Moreover, by incorporating the limited expressivity of the network into the layer-peeled model, we propose a new model, called memorization-dilation (MD). Given such model assumptions, we formally prove advantageous effects of the so-called label smoothing (LS) technique Szegedy et al. (2015) (training with a modification of the cross-entropy (CE) loss) in terms of generalization performance. This is further confirmed empirically.
2 RELATED WORK
Studying the nature of neural network optimization is challenging. In the past, a plethora of theoretical models has been proposed to do so Sun (2020). These range from analyzing simple linear Kunin et al. (2019); Zhu et al. (2020); Laurent & von Brecht (2018) to non-linear deep neural networks Saxe et al. (2014); Yun et al. (2018). As one prominent framework among others, Neural Tangent Kernels Jacot et al. (2018); Roberts et al. (2021), where neural networks are considered as linear models on top of randomized features, have been broadly leveraged for studying deep neural networks and their learning properties.
Many of the theoretical properties of deep neural networks in the regime of overparameterization are still unexplained. Nevertheless, certain peculiarities have emerged recently. Among those, so-called “benign overfitting” Bartlett et al. (2019); Li et al. (2021), where deep models are capable of perfectly fitting potentially noisy data while retaining accurate predictions, has recently attracted attention. Memorization has been identified as one significant factor contributing to this effect Arpit et al. (2017); Sanyal et al. (2021), which also relates to our studies. No less interesting, the learning risk of highly overparameterized models shows a double-descent behavior when varying the model
complexity Nakkiran et al. (2020) as yet another phenomenon. Lastly, the concept of NC Papyan et al. (2020) has recently shed light on symmetries in learned representations of overparameterized models.
After laying the foundation of a rigorous mathematical characterization of the NC phenomenon by Papyan et al. (2020), several follow-up works have broadened the picture. As the former proceeds from studying CE loss, the collapsing behavior has been investigated for alternative loss functions. For instance, squared losses have shown similar collapsing characteristics Poggio & Liao (2020; 2021), and have paved the way for more opportunities in its mathematical analysis, e.g., by an NC-interpretable decomposition Han et al. (2021). More recently, Kornblith et al. (2021) provide an exhaustive overview over several commonly used loss functions for training deep neural networks regarding their feature collapses.
Besides varying the loss function, different theoretical models have been proposed to analyze NC. Most prominently, unconstrained feature models have been considered, which characterize the penultimate layer activations as free optimization variables Mixon et al. (2020); Lu & Steinerberger (2020); E & Wojtowytsch (2021). This stems from the assumption that highly overparameterized models can approximate any patterns in the feature space. While unconstrained features models typically only look at the last feature encoder layer, layer-peeling allows for “white-boxing” further layers before the last one for a more comprehensive theoretical analysis Fang et al. (2021). Indeed, this approach has been applied in Tirer & Bruna (2022), which namely extends the unconstrained features model by one layer as well as the ReLU nonlinearity. On the other hand, Zhu et al. (2021), Ji et al. (2021) and Zhou et al. (2022a) extend the unconstrained features model analysis by studying the landscape of the loss function therein and the related training dynamics. Beyond unconstrained features models, Ergen & Pilanci (2021) introduce a convex analytical framework to characterize the encoder layers for a more profound understanding of the NC phenomenon. Referring to the implications of NC on our understanding of neural networks, Hui et al. (2022) and Galanti et al. (2021) discuss the impact of NC on test data in the sense of generalization and transfer learning. Finally, Kothapalli et al. (2022) provides a multifaceted survey of recent works related to NC.
3 LAYER-PEELED MODEL WITH POSITIVE FEATURES
As a prerequisite to the MD model, in this section we introduce a slightly modified version of the layer-peeled (or unconstrained features) model (see e.g. Zhu et al. (2021); Fang et al. (2021)), in which the features have to be positive. Accordingly, we will show that the global minimizers of the modified layer-peeled model correspond to an NC configuration, which differs from the global minimizers specified in other works and captures more closely the NC phenomenon in practice.
For conciseness, we denote by $H$ the matrix formed by the features $h_n^{(k)}$, $n \in [N]$, $k \in [K]$, as columns, and define $\|W\|$ and $\|H\|$ to be the Frobenius norms of the respective matrices, i.e. $\|W\|^2 = \sum_{n=1}^{N} \|w_n\|^2$ and $\|H\|^2 = \sum_{k=1}^{K}\sum_{n=1}^{N} \|h_n^{(k)}\|^2$. We consider the regularized version of the model (instead of the norm-constrained one as in e.g. Fang et al. (2021))1
$$\min_{W,H}\ \tilde{L}_\alpha(W,H) := L_\alpha(W,H) + \lambda_W \|W\|^2 + \frac{\lambda_H}{K}\|H\|^2 \quad \text{s.t.} \quad H \geq 0, \qquad (P_\alpha)$$
where $\lambda_W, \lambda_H > 0$ are the penalty parameters for the weight decays. By $L_\alpha$ we denote the empirical risk with respect to the LS loss with parameter $\alpha \in [0, 1)$, where $\alpha = 0$ corresponds to the conventional CE loss. More precisely, given a value of $\alpha$, the LS technique defines the label assigned to class $n \in [N]$ as the following probability vector:
$$y_n^{(\alpha)} = (1-\alpha)\, e_n + \frac{\alpha}{N}\, \mathbf{1}_N \in [0,1]^N,$$
where $e_n \in R^N$ denotes the $n$-th standard basis vector and $\mathbf{1}_N \in R^N$ denotes the vector consisting of only ones. Let $p : R^M \to R^N$ be the function that assigns to each feature representation $z \in R^M$
1Note that for simplicity we assume that the last layer does not have bias terms, i.e. b = 0. The result can be however easily extended to the more general case when the biases do not vanish. Namely, in presence of bias terms, the statement of Theorem 3.2 and also its proof remain unchanged.
the probability scores of the classes (as a probability vector in $R^N$),
$$p_W(z) := \mathrm{softmax}(Wz) := \left[\frac{e^{\langle w_m, z\rangle}}{\sum_{i=1}^{N} e^{\langle w_i, z\rangle}}\right]_{m=1}^{N} \in [0,1]^N.$$
Then the LS loss corresponding to a sample in class $n \in [N]$ is given by
$$\ell_\alpha\big(W, z, y_n^{(\alpha)}\big) := \left\langle -y_n^{(\alpha)},\ \log p_W(z) \right\rangle = \sum_{m=1}^{N} -y_{nm}^{(\alpha)} \log\big(p_W(z)_m\big) \tag{1}$$
and the LS empirical risk $L_\alpha$ is defined as
$$L_\alpha(W,H) = \frac{1}{NK}\sum_{k=1}^{K}\sum_{n=1}^{N} \ell_\alpha\big(W, h_n^{(k)}, y_n^{(\alpha)}\big).$$
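As a sanity check of the definitions above, here is a small NumPy sketch (ours, not from the paper) that evaluates the smoothed label $y_n^{(\alpha)}$ and the LS loss (1) for a single feature vector; setting alpha to 0 recovers the plain CE loss.

```python
import numpy as np

def ls_loss(W, z, n, alpha):
    """LS loss (1) for feature z and (0-indexed) class n; alpha=0 gives plain CE."""
    logits = W @ z
    logits = logits - logits.max()                 # shift for numerical stability
    log_p = logits - np.log(np.exp(logits).sum())  # log of p_W(z), the softmax scores
    N = len(log_p)
    y = np.full(N, alpha / N)                      # smoothed label y_n^(alpha)
    y[n] += 1.0 - alpha
    return -(y * log_p).sum()
```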
We will show that in common settings, the minimizers of $(P_\alpha)$ correspond to neural collapse (NC) configurations, which we formalize in Def. 3.1 below.

Definition 3.1 (NC configurations). Let $K, M, N \in \mathbb{N}$, $M \geq N$. A pair $(W, H)$ of a weight matrix formed by rows $w_n \in R^M$ and a feature matrix formed by columns $h_n^{(k)} \in R^M_+$ (with $n \in [N]$, $k \in [K]$) is said to be an NC configuration if

(i) The feature representations $h_n^{(k)}$ within every class $n \in [N]$ are equal for all $k \in [K]$, and thus equal to their class mean $\bar{h}_n := \frac{1}{K}\sum_{k=1}^{K} h_n^{(k)}$.

(ii) The class means $\{\bar{h}_n\}_{n=1}^{N}$ have equal norms and form an (entry-wise) non-negative orthogonal system.

(iii) Let $P_{\bar{h}^\perp}$ be the projection onto the subspace of $R^M$ orthogonal to $\bar{h} = \frac{1}{N}\sum_{n=1}^{N} \bar{h}_n$. Then for every $n \in [N]$, it holds $w_n = C\, P_{\bar{h}^\perp} \bar{h}_n$ for some constant $C$ independent of $n$.
Our main theorem in this section can be stated as follows.

Theorem 3.2. Let $M \geq N$, $\alpha \in [0, 1)$. Assume that $\frac{N-1}{N}\alpha + 2\sqrt{(N-1)\lambda_W \lambda_H} < 1$. Then any global minimizer of the problem $(P_\alpha)$ is an NC configuration.
Note that the NC configurations defined in Definition 3.1 above differ significantly from the ones specified in other works, e.g. Fang et al. (2021); Zhu et al. (2021); Zhou et al. (2022b) or Tirer & Bruna (2022), see Appendix B.1 for more discussion.
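For intuition, one simple way to write down a configuration satisfying Definition 3.1 (our own illustration, not a construction from the paper) is to take scaled standard basis vectors as class means (they are entry-wise non-negative, orthogonal and of equal norm) and to set the classifier rows to the projected class means:

```python
import numpy as np

def nc_configuration(N, M, K, scale=1.0, C=1.0):
    """Build (W, H) satisfying Def. 3.1 with basis-vector class means."""
    assert M >= N
    means = scale * np.eye(N, M)                 # bar{h}_n = scale * e_n: non-negative, orthogonal, equal norm
    H = np.repeat(means[:, None, :], K, axis=1)  # (i) every h_n^(k) equals its class mean
    h_bar = means.mean(axis=0)                   # global mean bar{h}
    P = np.eye(M) - np.outer(h_bar, h_bar) / (h_bar @ h_bar)  # projection onto bar{h}-orthogonal subspace
    W = C * means @ P                            # (iii) w_n = C * P bar{h}_n (P is symmetric)
    return W, H
```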
4 THE MEMORIZATION-DILATION MODEL
4.1 EXPERIMENTAL MOTIVATION
Previous studies of the NC phenomenon mainly focus on the collapsing variability of training activations, and make rather cautious statements about its effects on generalization. For instance, Papyan et al. (2020) report slightly improved test accuracies for training beyond zero training error. Going a step further, Zhu et al. (2021) show that the NC phenomenon also happens for overparameterized models when labels are completely randomized. Here, the models seem to memorize by overfitting the data points; however, a rigorous study of how label corruption affects generalization in the regime of NC is still lacking.
To fill the gap, we advocate analyzing the effects of label corruption in the training data on the feature collapse of the (previously unseen) test instances instead of the training instances. Ultimately, tight test class clusters go hand in hand with easier separation of the instances and, thus, a smaller generalization error. Following Zhu et al. (2021), we measure the collapse of the penultimate layer activations by the NC1 metric. This metric depicts the relative magnitude of the within-class covariance $\Sigma_W$ with respect to the between-class covariance $\Sigma_B$ of the penultimate layer features and is defined as
$$NC_1 := \frac{1}{N}\,\mathrm{trace}\big(\Sigma_W \Sigma_B^\dagger\big), \tag{2}$$
where
$$\Sigma_W := \frac{1}{NK}\sum_{n=1}^{N}\sum_{k=1}^{K} \big(h_n^{(k)} - \bar{h}_n\big)\big(h_n^{(k)} - \bar{h}_n\big)^\top \in R^{M\times M}, \qquad \Sigma_B := \frac{1}{N}\sum_{n=1}^{N} \big(\bar{h}_n - \bar{h}\big)\big(\bar{h}_n - \bar{h}\big)^\top \in R^{M\times M},$$
and $\Sigma_B^\dagger$ denotes the pseudo-inverse of $\Sigma_B$. Here, we adopt the notation from Section 1: $h_n^{(k)} \in R^M$ denotes the feature representation of the $k$-th sample in class $n$, $\bar{h}_n$ the class mean and $\bar{h}$ the global mean. Moreover, we distinguish $NC_1^{train}$ and $NC_1^{test}$, calculated on the training and test instances, respectively. We call $NC_1^{test}$ dilation. Let us now turn to the notion of memorization, which is not uniquely defined in the deep learning literature. Here, we define memorization in the context of the NC setting and in a global manner, different from other works, e.g. Feldman & Zhang (2020). Formally, suppose that label noise is incorporated by (independently) corrupting the label of each training instance with probability $\eta \in (0, 1)$, where corruption means drawing a label uniformly at random from the label space $Y$. We denote the set of corrupted instances by $[\tilde{K}]$. For a given dataset $D$ (with label noise $\eta$), we define memorization as
$$\mathrm{mem} := \sum_{n=1}^{N}\ \sum_{k\in[\tilde{K}]} \big\|h_n^{(k)} - h_n^{*}\big\|_2\,, \tag{3}$$
where $h_n^{*}$ denotes the mean of (unseen) test instances belonging to class $n$.
We call the original ground truth label of a sample its true label. We call the label after corruption, which may be the true label or not, the observed label. Since instances of the same true label tend to have similar input features in some sense, the network is biased to map them to similar feature representations. Instances are corrupted randomly, and hence instances with the same true label but different observed labels do not have predictable characteristics that would allow the network to separate them in a way that generalizes. When the network nevertheless succeeds in separating such instances, we say that the network memorized the feature representations of the corrupted instances in the training set. The metric mem in (3) thus measures memorization. Memorization also affects dilation. Indeed, the network uses the feature engineering part to embed samples with similar inputs (that originally came from the same class) into far-apart features that encode different labels. Such a process degrades the ability of the network to embed samples consistently, and leads to dilation.
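To make the two quantities driving the analysis concrete, here is a minimal NumPy sketch (ours; the array layout with shape (N, K, M) and the list of corrupted indices are illustrative assumptions) of the NC1 metric (2) and the memorization metric (3):

```python
import numpy as np

def nc1(H):
    """NC1 metric (2) for features H of shape (N, K, M)."""
    N, K, M = H.shape
    mu = H.mean(axis=1)                       # class means, shape (N, M)
    g = mu.mean(axis=0)                       # global mean, shape (M,)
    D = (H - mu[:, None, :]).reshape(-1, M)
    Sw = D.T @ D / (N * K)                    # within-class covariance Sigma_W
    B = mu - g
    Sb = B.T @ B / N                          # between-class covariance Sigma_B
    return np.trace(Sw @ np.linalg.pinv(Sb)) / N

def memorization(H_train, test_means, corrupted):
    """mem (3): distance of corrupted training features to the *test* class means.

    corrupted: list of (n, k) index pairs whose labels were flipped.
    """
    return sum(np.linalg.norm(H_train[n, k] - test_means[n]) for n, k in corrupted)
```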
To quantify the interaction between mem and $NC_1^{test}$, we analyzed the learned representations in the penultimate-layer feature space for different noise configurations. One may wonder whether one can see a systematic trend in the test collapse given the memorization, and how this evolves over different loss functions.
To this end, we trained simple multi-layer neural networks for two classes (N = 2), which we subsampled from the image classification datasets MNIST LeCun et al. (1998), FashionMNIST Xiao et al. (2017), CIFAR-10 Krizhevsky & Hinton (2009) and SVHN Netzer et al. (2011). The labels are corrupted with noise degrees η ∈ [0.025, 0.4]. The network consists of 9 hidden layers with 2048 neurons each, thus, it represents a vastly overparameterized model. The feature dimension M is set to the number of classes N . We trained these networks using the CE and LS loss with a smoothing factor α = 0.1, as well as the mean-squared error (MSE). Moreover, we consider label relaxation (LR) Lienen & Hüllermeier (2021) as a generalization to LS with a relaxation degree α = 0.1. The networks were trained until convergence in 200 epochs (where the last 50 epochs did not make any significant changes) using SGD with an initial learning rate of 0.1 multiplied by 0.1 each 40 epochs and a small weight decay of 0.001. Moreover, we considered ReLU as activation function throughout the network, as well as batch normalization in each hidden layer. A linear softmax classifier is composed on the encoder. We conducted each experiment ten times with different seeds.
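For concreteness, the setup described above corresponds roughly to the following PyTorch sketch (a hedged reconstruction from the text, not the authors' code; details such as initialization or the exact head may differ, and the label_smoothing argument requires PyTorch 1.10 or later):

```python
import torch.nn as nn
import torch.optim as optim

def make_encoder(in_dim, feat_dim=2, width=2048, depth=9):
    """9 hidden layers of width 2048 with BatchNorm + ReLU; feature dim M = N = 2."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.BatchNorm1d(width), nn.ReLU()]
        d = width
    layers += [nn.Linear(d, feat_dim)]
    return nn.Sequential(*layers)

encoder = make_encoder(in_dim=784)                    # e.g. flattened MNIST digits
classifier = nn.Linear(2, 2)                          # linear softmax head on the encoder
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # alpha = 0.1 for LS; 0.0 for CE

params = list(encoder.parameters()) + list(classifier.parameters())
opt = optim.SGD(params, lr=0.1, weight_decay=1e-3)
sched = optim.lr_scheduler.StepLR(opt, step_size=40, gamma=0.1)  # x0.1 every 40 of 200 epochs
```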
The results for the above experimental setting are shown in Fig. 1, in which one can observe the trends of $\sqrt{NC_1^{test}}$ per memorization for various configurations. As can be seen, the figure shows an approximately linear correspondence between $\sqrt{NC_1^{test}}$ and mem for the CE derivatives (CE and LS) on all datasets when mem is not large.
[Figure 1 shows two rows of four panels (mnist, fashionmnist, cifar10, svhn), each containing curves for CE, LS, LR and MSE with a color bar encoding the noise degree (0.05 to 0.35). Top row, “Test Collapse per Memorization (N = 2)”: $\sqrt{NC_1^{test}}$ (dilation) plotted against the memorization $(NK)^{-1}\sum_{n\in[N]}\sum_{k\in[K]} \|h_n^{(k)} - h_n^{*}\|_2$. Bottom row, “Test Accuracy per Memorization (N = 2)”: test accuracy plotted against the same memorization axis.]

Figure 1: Feature collapse of the test instances in terms of $\sqrt{NC_1^{test}}$ per memorization (top row) and the resulting test accuracies (bottom row), averaged over ten random seeds. Comparing the markers of the same color, it can be observed that LS consistently performs better than CE across all datasets, with very few exceptions (the very low noise degrees in cifar10).
Moreover, as CE and LS share the same slope, these results suggest that the degradation of the test collapse (aka dilation) is a function of memorization and the network expressivity, and not of the choice of the loss. The loss only affects how the noise translates to memorization, but not how memorization translates to dilation. Even though the same amount of noise is mapped to different memorization values by CE and LS, the memorization-dilation curve is nevertheless shared between CE and LS. Hence, since LS leads the network to memorize less, it results in improved performance (cf. Fig. 1). We can further see that MSE and LR show a different memorization-dilation correspondence, which means that these losses affect the inductive bias in a different way than CE and LS.
We repeated the experiments for different values of the feature dimension $M$ and show example results in Fig. 2. Here, one can see similar trends of dilation per memorization as before. In the appendix, we provide additional results showing the behavior in the multi-class case $N > 2$ with different models of label noise. The results support our MD model, and show that the memorization-dilation curve is roughly independent of the noise model for low-to-mid noise levels.
4.2 THE MEMORIZATION-DILATION MODEL
Motivated by the observations of the previous experiments, we propose the so-called memorization-dilation (MD) model, which extends the unconstrained features model by incorporating the interaction between memorization and dilation as a model assumption. By this, we explicitly capture the limited expressivity of the network, thereby modeling the inductive bias of the underlying model.
This model shall provide a basis to mathematically characterize the difference in the learning behavior of CE and LS. More specifically, we would like to know why LS shows improved generalization performance over conventional CE, as was observed in past works Müller et al. (2019). The main idea can be explained as follows. We first note that dilation is directly linked to generalization (see also Kornblith et al. (2021)), since the more concentrated the feature representations of each class are, the easier it is to separate the different classes with a linear classifier without having outliers crossing the decision boundary. The MD model asserts that dilation is a linear function of memorization. Hence, the only way that LS can lead to less dilation than CE, is if LS memorizes less than CE. Hence, the goal in our analysis is to show that, under the MD model, LS indeed leads to less memorization than CE. Note that this description is observed empirically in the experiments of Section 4.1.
Next we define the MD model in the binary classification setting.

Definition 4.1. We call the following minimization problem MD. Minimize the MD risk
$$R_{\lambda,\eta,\alpha}(U, r) := F_{\lambda,\alpha}(W, H, r) + \eta\, G_{\lambda,\alpha}(W, U, r)$$
with respect to the noisy feature embedding $U = [u_1, u_2] \in R^{M\times 2}_+$ and the standard deviation $r \geq 0$, under the constraints
$$\eta\, \|h_1 - u_2\| \leq C_{MD}\, \frac{r}{\|h_1 - h_2\|}, \tag{4}$$
$$\eta\, \|h_2 - u_1\| \leq C_{MD}\, \frac{r}{\|h_1 - h_2\|}. \tag{5}$$
Here,

• $H \in R^{M\times 2}_+$ (with columns $h_1, h_2$) and $W \in R^{2\times M}$ form an NC configuration (see Definition 3.1).

• $C_{MD} > 0$ is called the memorization-dilation slope, $0 \leq \alpha < 1$ is called the LS parameter, $\eta > 0$ the noise level, and $\lambda > 0$ the regularization parameter.

• $F_{\lambda,\alpha}$ is the component of the (regularized) risk that is associated with the correctly labeled samples,
$$F_{\lambda,\alpha}(W, H, r) := \int \Big( \ell_\alpha\big(W, h_1 + v, y_1^{(\alpha)}\big) + \lambda \|h_1 + v\|^2 \Big)\, d\mu_r^1(v) + \int \Big( \ell_\alpha\big(W, h_2 + v, y_2^{(\alpha)}\big) + \lambda \|h_2 + v\|^2 \Big)\, d\mu_r^2(v),$$
where $\mu_r^1$ and $\mu_r^2$ are some probability distributions with mean 0 and standard deviation $r$, and $\ell_\alpha$ is the LS loss defined in (1).

• $G_{\lambda,\alpha}$ is the component of the (regularized) risk that is associated with the corrupted samples, defined as
$$G_{\lambda,\alpha}(W, U, r) = \ell_\alpha\big(W, u_1, y_1^{(\alpha)}\big) + \ell_\alpha\big(W, u_2, y_2^{(\alpha)}\big) + \lambda \|u_1\|^2 + \lambda \|u_2\|^2.$$
The MD model can be interpreted as follows. First, we consider the feature representations of the correctly labeled samples in each class as samples from a distribution (namely $\mu_r^{1,2}$ in Def. 4.1) with standard deviation $r$, a parameter that measures the dilation of the class cluster. In a natural way, the corresponding risk $F_{\lambda,\alpha}$ involves the loss averaged over all samples, i.e. the loss integral over the distribution. For simplicity, we assume that the class centers $h_1, h_2$ as well as the weight matrix $W$ are fixed as described by the NC configuration. This is a reasonable simplification, as it has always been observed in the experiments.
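To unpack the definition, the following sketch (ours) evaluates the MD risk for given $u_1, u_2$ and $r$, approximating the integrals in $F_{\lambda,\alpha}$ by Monte Carlo. Taking $\mu_r^1 = \mu_r^2$ to be isotropic Gaussians scaled so that $\|v\| \approx r$ is our own modeling assumption; Definition 4.1 only fixes the mean and standard deviation.

```python
import numpy as np

def ls_loss(W, z, n, alpha):                    # LS loss (1), as in the earlier sketch
    logits = W @ z
    logits = logits - logits.max()
    log_p = logits - np.log(np.exp(logits).sum())
    y = np.full(len(log_p), alpha / len(log_p))
    y[n] += 1.0 - alpha
    return -(y * log_p).sum()

def md_risk(W, h, U, r, lam, eta, alpha, n_mc=2000, seed=0):
    """MD risk R_{lam,eta,alpha}(U, r); h and U are (2, M) arrays whose rows are
    the class means h_1, h_2 and the noisy embeddings u_1, u_2."""
    rng = np.random.default_rng(seed)
    M = h.shape[1]
    F = 0.0
    for n in range(2):                          # risk over correctly labeled samples
        V = rng.normal(scale=r / np.sqrt(M), size=(n_mc, M))
        F += np.mean([ls_loss(W, h[n] + v, n, alpha) + lam * (h[n] + v) @ (h[n] + v)
                      for v in V])
    G = sum(ls_loss(W, U[n], n, alpha) + lam * U[n] @ U[n] for n in range(2))
    return F + eta * G
```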
On the other hand, the feature representations of the corrupted samples are $u_1$ and $u_2$.2 The amount of memorization in the first class is defined to be $\eta \|h_2 - u_1\|$, since the more noise $\eta$ there is, the more examples we need to memorize. The amount of memorization in the second class is defined in the same way. The (normalized) dilation is defined to be $\frac{r}{\|h_1 - h_2\|}$, which models a quantity similar to (2).

2 Certainly one can consider, instead of two single points $u_1$ and $u_2$, two distributions centered around $u_1$ and $u_2$, similarly as before for the uncorrupted samples. However, it is quite straightforward to see that the minimization of the MD risk over the dilation of these two distributions is independent of the other variables (unlike $r$), and thus the minimum is attained when they collapse into two single points. Thus, for convenience we assume directly here that $G_{\lambda,\alpha}$ involves only two single points.
[Figure 3 caption: The $h_2^{(k)}$ are test images correctly labeled as 1, with centroid $h_2$. The centroid of the test images with correct label 0 is $h_1$. The centroid of training images which were originally labeled as 1 but are mislabeled as 0 is $u_1$. The memorization of $u_1$ moves it close to $h_1$, and causes dilation of the instances $h_2^{(k)}$.]
The constraints (4) and (5) tell us that in order to map noisy samples $u_1$ away from $h_2$, we have to pay with dilation $r$: the larger $r$ is, the further away we can map $u_1$ from $h_2$. The correspondence between memorization and dilation is linear with slope $C_{MD}$ by assumption. There are two main forces in the optimization problem: $u_1$ would like to be as close as possible to its optimal position $h_1$, and similarly $u_2$ would like to be close to $h_2$. In view of the constraints (4) and (5), to achieve this, $r$ has to be increased to $r_{max} := \eta \|h_1 - h_2\|^2 / C_{MD}$. On the other hand, the optimal $r$ for the term $F_{\lambda,\alpha}$ is $r = 0$, namely the layer-peeled NC configuration. An optimal solution hence balances between memorization and dilation. See Fig. 3 for a visualization of the MD model.
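A small numeric illustration (ours, with toy values) of this trade-off: the constraint (5) caps how far $u_1$ may move from $h_2$ for a given dilation $r$, and placing $u_1$ at its preferred position $h_1$ requires exactly $r = r_{max}$:

```python
import numpy as np

h1, h2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # toy NC class means (N = 2)
eta, C_md = 0.1, 0.5                                  # noise level and MD slope
sep = np.linalg.norm(h1 - h2)                         # ||h1 - h2|| = sqrt(2)

# constraint (5): eta * ||h2 - u1|| <= C_md * r / ||h1 - h2||
def max_displacement(r):
    """Furthest u1 may sit from h2 for a given dilation r."""
    return C_md * r / (eta * sep)

r_max = eta * sep ** 2 / C_md                         # dilation needed to allow u1 = h1
assert np.isclose(max_displacement(r_max), sep)       # at r_max, u1 can reach h1 exactly
print(r_max)                                          # 0.4 for these toy numbers
```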
Our goal in this section is to compare the optimal value of $r$ in the case of the LS and CE losses. We will distinguish between these two cases by setting the value of $\alpha$ in the MD model to 0 for CE and to some $\alpha_0 > 0$ for LS. This will result in two different scales of the feature embeddings $H$, denoted by $H^{CE}$ and $H^{LS}$ for the CE and LS losses respectively, with the ratio
$$\gamma := \|H^{CE}\| \,/\, \|H^{LS}\| > 1, \tag{6}$$
which holds under the reasonable assumption that the LS technique is sufficiently effective, or more precisely $\alpha_0 > 2\sqrt{\lambda_W \lambda_H}$.
The main result in this section will be Theorem 4.3, which states, informally, that in the low-noise regime the optimal dilation in the case of the LS loss is smaller than that in the case of the CE loss. Before presenting this theorem, we first establish several assumptions on the distributions $\mu_r^{1,2}$ and the noise $\eta$ in Assumption 4.2. Basically, we allow a rich class of distributions and only require certain symmetry and bounded supports in terms of $r$, as well as requiring $\eta$ to be small in terms of the ratio $\gamma$.
Assumption 4.2.

1. Let $\alpha_0 > 0$. We assume that the solution of
$$\min_{W,H}\ \ell_\alpha\big(W, h_1, y_1^{(\alpha)}\big) + \ell_\alpha\big(W, h_2, y_2^{(\alpha)}\big) + \lambda_W \|W\|^2 + \lambda_H \|H\|^2 \quad \text{s.t.}\quad H \geq 0$$
is given by $(W, H) = (W^{CE}, H^{CE})$ for $\alpha = 0$ and $(W, H) = (W^{LS}, H^{LS})$ for $\alpha = \alpha_0$.

2. Assume that the distributions $\mu_r^1$ and $\mu_r^2$ are centered, in the sense that
$$\int \langle w_2 - w_1, v\rangle\, d\mu_r^1(v) = \int \langle w_1 - w_2, v\rangle\, d\mu_r^2(v) = 0, \qquad \int \langle h_1, v\rangle\, d\mu_r^1(v) = \int \langle h_2, v\rangle\, d\mu_r^2(v) = 0.$$
Furthermore, we assume that there exists a constant $A > 0$ such that $\|v\| \leq Ar$ for any vector $v$ that lies in the support of $\mu_r^1$ or in the support of $\mu_r^2$.

3. Assume that the noise level $\eta$ and the LS parameter $\alpha_0$ satisfy the following. We suppose $\alpha_0 > 4\sqrt{\lambda_W \lambda_H}$, which guarantees $\gamma := \|H^{CE}\| / \|H^{LS}\| > 1$. We moreover suppose that $\eta$ is sufficiently small to guarantee $\eta^{1/2} < \tilde{C}\,(1 - \frac{1}{\gamma})$, where $\tilde{C} := \frac{C_{MD}}{\sqrt{2}\,\|h_1^{CE} - h_2^{CE}\|}$.
Now our main result in this section can be formally stated as below.

Theorem 4.3. Suppose that Assumption 4.2 holds true for $M \geq N = 2$ and $\lambda := \lambda_H$. Let $r_*^{CE}$ and $r_*^{LS}$ be the optimal dilations, i.e. the optimum $r$ in the MD problem, corresponding to the CE and LS loss (accordingly $\alpha = 0$ and $\alpha = \alpha_0$), respectively. Then it holds that
$$\frac{r_*^{CE}}{\|h_1^{CE} - h_2^{CE}\|} > \frac{r_*^{LS}}{\|h_1^{LS} - h_2^{LS}\|}.$$
Theorem 4.3 reveals a mechanism by which LS achieves better generalization than CE. It is proven that LS memorizes and dilates less than CE, which is associated with better generalization. Note that in practice, the data often have noise in the sense that not all examples are perfectly labeled. More importantly, examples from different classes may share many similarities, a situation that is also covered by the MD model: the feature representations of samples from those classes are biased toward each other. In this case, LS also leads to decreased dilation which corresponds to better class separation and higher performance Kornblith et al. (2021).
Interestingly, the concurrent work Zhou et al. (2022b) has shown that in the noiseless setting CE and LS lead to largely identical test accuracy, which seems to contradict the claim that LS performs better, made by our work as well as many others, e.g. Kornblith et al. (2021); Müller et al. (2019). However, note that Zhou et al. (2022b) requires the network to be sufficiently large so that it has enough expressive power to fit the underlying mapping from input data to targets, as well as to be trained until convergence. While the latter is easy to obtain, it is difficult even to check whether the first requirement holds. The difference between the two results is hence possibly caused by the effect of noise and by the network expressivity: while we aim to model the limited expressivity by the MD relation, Zhou et al. (2022b) focuses on networks with approximately infinite expressivity.
The MD model combines a statistical term Fλ,α, that describes the risk over the distribution of feature embeddings of samples with clean labels, and an empirical term ηGλ,α that describes the risk over training samples with noisy labels. One point of view that can motivate such a hybrid statistical-empirical definition is the assumption that the network only memorizes samples of noisy labels, but not samples of clean labels. Such a memorization degrades (dilates) both the collapse of the training and test samples, possibly with different memorization-dilation slopes. However, memorization is not limited to corrupted labels, but can also apply to samples of clean labels Feldman & Zhang (2020), by which the learner can partially negate the dilation of the training features (but not test features). The fact that our model does not take the memorization of clean samples into account is one of its limitations. We believe that future work should focus on modeling the total memorization of all examples. Nevertheless, we believe that our current MD model has merit, since 1) noisy labels are memorized more than clean labels, and especially in the low noise regime the assumption of observing memorization merely for corrupted labels appears reasonable, and 2) our approach and proof techniques can be the basis of more elaborate future MD models.
5 CONCLUSION
In this paper, we first characterized the global minimizers of the Layer-Peeled Model (or the Unconstrained Features Model) with a positivity condition on the feature representations. Our characterization shows some distinctions from the results that have been obtained in recent works for the same model without feature positivity. Besides the conventional cross-entropy (CE) loss, we studied the model in the case of the label smoothing (LS) loss, showing that NC also occurs when applying this technique.
Then we extended the model to the so-called Memorization-Dilation (MD) Model by incorporating the limited expressivity of the network. Using the MD model, which is supported by our experimental observations, we show that when trained with the LS loss, the network memorizes less than when trained by the CE loss. This poses one explanation to the improved generalization performance of the LS technique over the conventional CE loss.
Our model has limitations, however, namely that it is limited to the case of two classes. Motivated by promising results on the applicability of our model to the multi-class setting, we believe that future work should focus on extending the MD model in this respect. With such extensions, memorization-dilation analysis has the potential to underlie a systematic comparison of the generalization capabilities of different losses, such as CE, LS, and label relaxation, by analytically deriving formulas for the amount of memorization associated with each loss.
ACKNOWLEDGMENTS
This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (CRC 901 project no. 160364472). Moreover, the authors gratefully acknowledge the funding of this project by computing time provided by the Paderborn Center for Parallel Computing (PC2). | 1. What is the focus of the paper regarding neural collapse?
2. What are the strengths of the proposed approach, particularly in its theoretical analysis?
3. What are the weaknesses of the paper, especially in its theoretical setup and limitations?
4. Do you have any concerns or suggestions regarding the paper's methodology or conclusions?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper theoretically studies the notion of neural collapse (NC) -- the penultimate-layer representations of all the examples in a specific class collapse to a single representation, and separate from other classes. Specifically, this paper studies this notion under uniform label noise, and proposes a model called memorization-dilation (M-D), in which it shows that label smoothing leads to less memorization and better generalization.
Strengths And Weaknesses
Strength
This paper theoretically studies neural collapse (NC) with a definition that describes the empirically observed phenomena more precisely than other papers.
This paper formally studies NC under label noise with a memorization-dilation model, and makes a connection between memorization and NC.
The theoretical model is motivated from empirical observations with deep neural networks on real data.
Weakness
The theoretical setup for studying neural collapse is quite different from how a real neural network behaves in practice. In particular, it assumes "infinite expressivity" and allows the representation of each example to be a freely trainable parameter. The resulting formulation looks like a matrix factorization problem with a classification loss. I think the most interesting part of neural network representation learning (and collapsing) would mostly happen jointly in the layers below the final classifier layer, and in how those layers share weights when jointly computing the representations for all the training examples. This is especially important for the topic of this paper, where label noise is introduced to study memorization. In this case, the lower layers are forced to build different representations for visually similar inputs in some cases when they are assigned different training labels. But these interactions would be completely missing in the model proposed here.
I appreciate that the empirical studies in this paper uses deep neural networks. However, since the theoretical models are so different, I would like to see empirical studies with a similar setup, using networks with approximately infinite expressivity. Or better, simply using a model where the representations are directly optimizable free variables, and see if the empirical observations are still similar and equally well for motivating the M-D models. One question here is how to get the representations for the test examples for computing Eq (3) in this case.
The analysis is limited to binary classification problems.
Clarity, Quality, Novelty And Reproducibility
The presentation of this paper is relatively easy to follow. I do have a nitpick request: I think a more common convention is to use n, N for example indices and k, K for class indices. This paper does the opposite and confused me a number of times during the reading. |
ICLR | Title
A Compare-Aggregate Model for Matching Text Sequences
Abstract
Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general “compare-aggregate” framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network.
1 INTRODUCTION
Many natural language processing problems involve matching two or more sequences to make a decision. For example, in textual entailment, one needs to determine whether a hypothesis sentence can be inferred from a premise sentence (Bowman et al., 2015). In machine comprehension, given a passage, a question needs to be matched against it in order to find the correct answer (Richardson et al., 2013; Tapaswi et al., 2016). Table 1 gives two example sequence matching problems. In the first example, a passage, a question and four candidate answers are given. We can see that to get the correct answer, we need to match the question against the passage and identify the last sentence to be the answer-bearing sentence. In the second example, given a question and a set of candidate answers, we need to find the answer that best matches the question. Because of the fundamental importance of comparing two sequences of text to judge their semantic similarity or relatedness, sequence matching has been well studied in natural language processing.
With recent advances of neural network models in natural language processing, a standard practice for sequence modeling now is to encode a sequence of text as an embedding vector using models such as RNN and CNN. To match two sequences, a straightforward approach is to encode each sequence as a vector and then to combine the two vectors to make a decision (Bowman et al., 2015; Feng et al., 2015). However, it has been found that using a single vector to encode an entire sequence is not sufficient to capture all the important information from the sequence, and therefore advanced techniques such as attention mechanisms and memory networks have been applied to sequence matching problems (Hermann et al., 2015; Hill et al., 2016; Rocktäschel et al., 2015).
A common trait of a number of these recent studies on sequence matching problems is the use of a “compare-aggregate” framework (Wang & Jiang, 2016b; He & Lin, 2016; Parikh et al., 2016). In such a framework, comparison of two sequences is not done by comparing two vectors each representing an entire sequence. Instead, these models first compare vector representations of smaller units such as words from these sequences and then aggregate these comparison results to make the final decision. For example, the match-LSTM model proposed by Wang & Jiang (2016b) for textual entailment first compares each word in the hypothesis with an attention-weighted version of the premise. The comparison results are then aggregated through an LSTM. He & Lin (2016) proposed a pairwise word interaction model that first takes each pair of words from two sequences and applies a comparison unit on the two words. It then combines the results of these word interactions using a similarity focus layer followed by a multi-layer CNN. Parikh et al. (2016) proposed a decomposable attention model for textual entailment, in which words from each sequence are compared with an
attention-weighted version of the other sequence to produce a series of comparison vectors. The comparison vectors are then aggregated and fed into a feed forward network for final classification.
Although these studies have shown the effectiveness of such a “compare-aggregate” framework for sequence matching, there are at least two limitations with these previous studies: (1) Each of the models proposed in these studies is tested on one or two tasks only, but we hypothesize that this general framework is effective on many sequence matching problems. There has not been any study that empirically verifies this. (2) More importantly, these studies did not pay much attention to the comparison function that is used to compare two small textual units. Usually a standard feedforward network is used (Hu et al., 2014; Wang & Jiang, 2016b) to combine two vectors representing two units that need to be compared, e.g., two words. However, based on the nature of these sequence matching problems, we essentially need to measure how semantically similar the two sequences are. Presumably, this property of these sequence matching problems should guide us in choosing more appropriate comparison functions. Indeed He & Lin (2016) used cosine similarity, Euclidean distance and dot product to define the comparison function, which seem to be better justifiable. But they did not systematically evaluate these similarity or distance functions or compare them with a standard feedforward network.
In this paper, we argue that the general “compare-aggregate” framework is effective for a wide range of sequence matching problems. We present a model that follows this general framework and test it on four different datasets, namely, MovieQA, InsuranceQA, WikiQA and SNLI. The first three datasets are for Question Answering, but the setups of the tasks are quite different. The last dataset is for textual entailment. More importantly, we systematically present and test six different comparison functions. We find that overall a comparison function based on element-wise subtraction and multiplication works the best on the four datasets.
The contributions of this work are twofold: (1) Using four different datasets, we show that our model following the “compare-aggregate” framework is very effective when compared with the state-of-the-art performance on these datasets. (2) We conduct a systematic evaluation of different comparison functions and show that a comparison function based on element-wise operations, which is not widely used for word-level matching, works the best across the different datasets. We believe that these findings will be useful for future research on sequence matching problems. We have also made our code available online.1
2 METHOD
In this section, we propose a general model following the “compare-aggregate” framework for matching two sequences. This general model can be applied to different tasks. We focus our discussion on six different comparison functions that can be plugged into this general “compare-aggregate” model. In particular, we hypothesize that two comparison functions based on element-wise operations, SUB and MULT, are good middle ground between highly flexible functions using standard neural network models and highly restrictive functions based on cosine similarity and/or Euclidean
1https://github.com/shuohangwang/SeqMatchSeq
distance. As we will show in the experiment section, these comparison functions based on elementwise operations can indeed perform very well on a number of sequence matching problems.
2.1 PROBLEM DEFINITION AND MODEL OVERVIEW
The general setup of the sequence matching problem we consider is the following. We assume there are two sequences to be matched. We use two matrices Q ∈ Rd×Q and A ∈ Rd×A to represent the word embeddings of the two sequences, where Q and A are the lengths of the two sequences, respectively, and d is the dimensionality of the word embeddings. In other words, each column vector of Q or A is an embedding vector representing a single word. Given a pair of Q and A, the goal is to predict a label y. For example, in textual entailment, Q may represent a premise and A a hypothesis, and y indicates whether Q entails A or contradicts A. In question answering, Q may be a question and A a candidate answer, and y indicates whether A is the correct answer to Q.
We treat the problem as a supervised learning task. We assume that a set of training examples in the form of (Q,A, y) is given and we aim to learn a model that maps any pair of (Q,A) to a y.
An overview of our model is shown in Figure 1. The model can be divided into the following four layers:
1. Preprocessing: We use a preprocessing layer (not shown in the figure) to process Q and A to obtain two new matrices $\bar{Q} \in R^{l\times Q}$ and $\bar{A} \in R^{l\times A}$. The purpose here is to use some gate values to control the importance of different words in making the predictions on the sequence pair. For example, $\bar{q}_i \in R^l$, which is the $i$-th column vector of $\bar{Q}$, encodes the $i$-th word in Q.
2. Attention: We apply a standard attention mechanism on $\bar{Q}$ and $\bar{A}$ to obtain attention weights over the column vectors in $\bar{Q}$ for each column vector in $\bar{A}$. With these attention weights, for each column vector $\bar{a}_j$ in $\bar{A}$, we obtain a corresponding vector $h_j$, which is an attention-weighted sum of the column vectors of $\bar{Q}$.
3. Comparison: We use a comparison function f to combine each pair of $\bar{a}_j$ and $h_j$ into a vector $t_j$.
4. Aggregation: We use a CNN layer to aggregate the sequence of vectors tj for the final classification.
Although this model follows more or less the same framework as the model proposed by Parikh et al. (2016), our work has some notable differences. First, we will pay much attention to the comparison function f and compare a number of options, including some uncommon ones based on elementwise operations. Second, we apply our model to four different datasets representing four different tasks to evaluate its general effectiveness for sequence matching problems. There are also some other differences from the work by Parikh et al. (2016). For example, we use a CNN layer instead of summation and concatenation for aggregation. Our attention mechanism is one-directional instead of two-directional.
In the rest of this section we will present the model in detail. We will focus mostly on the comparison functions we consider.
2.2 PREPROCESSING AND ATTENTION
Inspired by the use of gates in LSTM and GRU, we preprocess Q and A with the following formulas:
$$\bar{Q} = \sigma(W^i Q + b^i \otimes e_Q) \odot \tanh(W^u Q + b^u \otimes e_Q), \qquad \bar{A} = \sigma(W^i A + b^i \otimes e_A) \odot \tanh(W^u A + b^u \otimes e_A), \tag{1}$$
where $\odot$ is element-wise multiplication, and $W^i, W^u \in R^{l\times d}$ and $b^i, b^u \in R^l$ are parameters to be learned. The outer product $(\cdot \otimes e_X)$ produces a matrix or row vector by repeating the vector or scalar on the left $X$ times. Here $\sigma(W^i Q + b^i \otimes e_Q)$ and $\sigma(W^i A + b^i \otimes e_A)$ act as gate values to control the degree to which the original values of Q and A are preserved in $\bar{Q}$ and $\bar{A}$. For example, for stop words, their gate values would likely be low for tasks where stop words make little difference to the final predictions.
In this preprocessing step, the word order does not matter. Although a better way would be to use RNN such as LSTM and GRU to chain up the words such that we can capture some contextual information, this could be computationally expensive for long sequences. In our experiments, we only incorporated LSTM into the formulas above for the SNLI task.
The general attention (Luong et al., 2015) layer is built on top of the resulting $\bar{Q}$ and $\bar{A}$ as follows:
$$G = \mathrm{softmax}\big((W^g \bar{Q} + b^g \otimes e_Q)^\top \bar{A}\big), \qquad H = \bar{Q} G, \tag{2}$$
where $W^g \in R^{l\times l}$ and $b^g \in R^l$ are parameters to be learned, $G \in R^{Q\times A}$ is the attention weight matrix, and $H \in R^{l\times A}$ are the attention-weighted vectors. Specifically, $h_j$, which is the $j$-th column vector of $H$, is a weighted sum of the column vectors of $\bar{Q}$ and represents the part of $\bar{Q}$ that best matches the $j$-th word in A. Next we will combine $h_j$ and $\bar{a}_j$ using a comparison function.
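A compact NumPy sketch (ours; shapes follow the equations above, and the parameter matrices are assumed given) of the preprocessing gate in Eq. (1) and the attention layer in Eq. (2):

```python
import numpy as np

def softmax(X, axis=0):
    X = X - X.max(axis=axis, keepdims=True)
    E = np.exp(X)
    return E / E.sum(axis=axis, keepdims=True)

def sigmoid(X):
    return 1.0 / (1.0 + np.exp(-X))

def preprocess(X, Wi, Wu, bi, bu):
    """Eq. (1): gated projection of word embeddings X (d x len) to (l x len)."""
    return sigmoid(Wi @ X + bi[:, None]) * np.tanh(Wu @ X + bu[:, None])

def attend(Q_bar, A_bar, Wg, bg):
    """Eq. (2): column H[:, j] is the attention-weighted view of Q_bar for word j of A."""
    G = softmax((Wg @ Q_bar + bg[:, None]).T @ A_bar, axis=0)  # (Q x A), columns sum to 1
    return Q_bar @ G
```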
2.3 COMPARISON
The goal of the comparison layer is to match each $\bar{a}_j$, which represents the $j$-th word and its context in A, with $h_j$, which represents a weighted version of Q that best matches $\bar{a}_j$. Let f denote a comparison function that transforms $\bar{a}_j$ and $h_j$ into a vector $t_j$ to represent the comparison result.
A natural choice of f is a standard neural network layer that consists of a linear transformation followed by a non-linear activation function. For example, we can consider the following choice:
$$\text{NEURALNET (NN):}\quad t_j = f(\bar{a}_j, h_j) = \mathrm{ReLU}\left(W \begin{bmatrix} \bar{a}_j \\ h_j \end{bmatrix} + b\right), \tag{3}$$
where matrix $W \in R^{l\times 2l}$ and vector $b \in R^l$ are parameters to be learned. Alternatively, another natural choice is a neural tensor network (Socher et al., 2013) as follows:
$$\text{NEURALTENSORNET (NTN):}\quad t_j = f(\bar{a}_j, h_j) = \mathrm{ReLU}\left(\bar{a}_j^\top T^{[1\ldots l]} h_j + b\right), \tag{4}$$
where tensor $T^{[1\ldots l]} \in R^{l\times l\times l}$ and vector $b \in R^l$ are parameters to be learned.
However, we note that for many sequence matching problems, we intend to measure the semantic similarity or relatedness of the two sequences. So at the word level, we also intend to check how similar or related $\bar{a}_j$ is to $h_j$. For this reason, a more natural choice used in some previous work is Euclidean distance or cosine similarity between $\bar{a}_j$ and $h_j$. We therefore consider the following definition of f:
$$\text{EUCLIDEAN+COSINE (EUCCOS):}\quad t_j = f(\bar{a}_j, h_j) = \begin{bmatrix} \|\bar{a}_j - h_j\|_2 \\ \cos(\bar{a}_j, h_j) \end{bmatrix}. \tag{5}$$
Note that with EUCCOS, the resulting vector $t_j$ is only a 2-dimensional vector. Although EUCCOS is a well-justified comparison function, we suspect that it may lose some useful information from the original vectors $\bar{a}_j$ and $h_j$. On the other hand, NN and NTN are too general and thus do not capture the intuition that we care mostly about the similarity between $\bar{a}_j$ and $h_j$.
To use something that is a good compromise between the two extreme cases, we consider the following two new comparison functions, which operate on the two vectors in an element-wise manner. These functions have been used previously by Mou et al. (2016).
$$\text{SUBTRACTION (SUB):}\quad t_j = f(\bar{a}_j, h_j) = (\bar{a}_j - h_j) \odot (\bar{a}_j - h_j), \tag{6}$$
$$\text{MULTIPLICATION (MULT):}\quad t_j = f(\bar{a}_j, h_j) = \bar{a}_j \odot h_j. \tag{7}$$
Note that the operator $\odot$ is element-wise multiplication. For both comparison functions, the resulting vector $t_j$ has the same dimensionality as $\bar{a}_j$ and $h_j$.
We can see that SUB is closely related to Euclidean distance in that Euclidean distance is the sum of all the entries of the vector tj produced by SUB. But by not summing up these entries, SUB preserves some information about the different dimensions of the original two vectors. Similarly, MULT is closely related to cosine similarity but preserves some information about the original two vectors.
Finally, we consider combining SUB and MULT followed by an NN layer as follows:
$$\text{SUBMULT+NN:}\quad t_j = f(\bar{a}_j, h_j) = \mathrm{ReLU}\left(W \begin{bmatrix} (\bar{a}_j - h_j) \odot (\bar{a}_j - h_j) \\ \bar{a}_j \odot h_j \end{bmatrix} + b\right). \tag{8}$$
In summary, we consider six different comparison functions: NN, NTN, EUCCOS, SUB, MULT and SUBMULT+NN. Among these functions, the last three (SUB, MULT and SUBMULT+NN) have not been widely used in previous work for word-level matching.
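For reference, here are the six comparison functions in one place, as a NumPy sketch of ours; W, b and T stand for the learned parameters from Eqs. (3)-(8):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def nn_cmp(a, h, W, b):          # Eq. (3): W has shape (l, 2l)
    return relu(W @ np.concatenate([a, h]) + b)

def ntn_cmp(a, h, T, b):         # Eq. (4): T has shape (l, l, l); t[k] = a^T T[k] h
    return relu(np.einsum('i,kij,j->k', a, T, h) + b)

def euccos_cmp(a, h):            # Eq. (5): 2-dimensional output
    cos = a @ h / (np.linalg.norm(a) * np.linalg.norm(h))
    return np.array([np.linalg.norm(a - h), cos])

def sub_cmp(a, h):               # Eq. (6)
    return (a - h) * (a - h)

def mult_cmp(a, h):              # Eq. (7)
    return a * h

def submult_nn_cmp(a, h, W, b):  # Eq. (8): W has shape (l, 2l)
    return relu(W @ np.concatenate([sub_cmp(a, h), mult_cmp(a, h)]) + b)
```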
2.4 AGGREGATION
After we apply the comparison function to each pair of $\bar{a}_j$ and $h_j$ to obtain a series of vectors $t_j$, finally we aggregate these vectors using a one-layer CNN (Kim, 2014):
$$r = \mathrm{CNN}([t_1, \ldots, t_A]). \tag{9}$$
$r \in R^{nl}$ is then used for the final classification, where $n$ is the number of windows in the CNN.
3 EXPERIMENTS
In this section, we evaluate our model on four different datasets representing different tasks. The first three datasets are question answering tasks while the last one is on textual entailment. The statistics of the four datasets are shown in Table 2. We will first introduce the task settings and the way we customize the “compare-aggregate” structure to each task. Then we will show the baselines for the different datasets. Finally, we discuss the experiment results shown in Table 3 and the ablation study shown in Table 4.
3.1 TASK-SPECIFIC MODEL STRUCTURES
In all these tasks, we use matrix Q ∈ Rd×Q to represent the question or premise and matrix Ak ∈ Rd×Ak (k ∈ [1,K]) to represent the kth answer or the hypothesis. For the machine comprehension task MovieQA (Tapaswi et al., 2016), there is also a matrix P ∈ Rd×P that represents the plot of a movie. Here Q is the length of the question or premise, Ak the length of the kth answer, and P the length of the plot.
For the SNLI (Bowman et al., 2015) dataset, the task is text entailment, which identifies the relationship (entailment, contradiction or neutral) between a premise sentence and a hypothesis sentence. Here K = 1, and there are exactly two sequences to match. The actual model structure is what we have described before.
For the InsuranceQA (Feng et al., 2015) dataset, the task is an answer selection task which needs to select the correct answer for a question from a candidate pool. For the WikiQA (Yang et al., 2015) datasets, we need to rank the candidate answers according to a question. For both tasks,
there are K candidate answers for each question. Let us use $r_k$ to represent the resulting vector produced by Eqn. 9 for the $k$-th answer. In order to select one of the K answers, we first define $R = [r_1, r_2, \ldots, r_K]$. We then compute the probability of the $k$-th answer being the correct one as follows:
$$p(k|R) = \mathrm{softmax}\big(w^\top \tanh(W^s R + b^s \otimes e_K) + b \otimes e_K\big), \tag{10}$$
where $W^s \in R^{l\times nl}$, $w \in R^l$, $b^s \in R^l$, $b \in R$ are parameters to be learned. For the machine comprehension task MovieQA, each question is related to Plot Synopses written by fans after watching the movie, and each question has five candidate answers. So for each candidate answer there are three sequences to be matched: the plot P, the question Q and the answer $A_k$. For each $k$, we first match Q and P and refer to the matching result at position $j$ as $t_j^q$, as generated by one of the comparison functions f. Similarly, we also match $A_k$ with P and refer to the matching result at position $j$ as $t_{k,j}^a$. We then define
$$t_{k,j} = \begin{bmatrix} t_j^q \\ t_{k,j}^a \end{bmatrix}, \qquad r_k = \mathrm{CNN}([t_{k,1}, \ldots, t_{k,P}]).$$
To select an answer from the K candidate answers, again we use Eqn. 10 to compute the probabilities.
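Finally, a sketch (ours) of the candidate-scoring step in Eq. (10), given the aggregated vectors $r_k$ stacked as columns of R:

```python
import numpy as np

def select_answer(R, Ws, w, bs, b):
    """Eq. (10): R has shape (nl, K); returns probabilities over the K candidates."""
    scores = w @ np.tanh(Ws @ R + bs[:, None]) + b    # shape (K,)
    scores = scores - scores.max()                    # shift for numerical stability
    p = np.exp(scores) / np.exp(scores).sum()
    return p                                          # predicted answer: int(np.argmax(p))
```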
The implementation details of the modes are as follows. The word embeddings are initialized from GloVe (Pennington et al., 2014). During training, they are not updated. The word embeddings not found in GloVe are initialized with zero.
The dimensionality l of the hidden layers is set to 150. We use ADAMAX (Kingma & Ba, 2015) with the coefficients β1 = 0.9 and β2 = 0.999 to optimize the model. We do not use L2 regularization. The main parameter we tuned is the dropout on the embedding layer. For WikiQA, which is a relatively small dataset, we also tune the learning rate and the batch size. For the others, we set the batch size to 30 and the learning rate to 0.002.
3.2 BASELINES
Here, we will introduce the baselines for each dataset. We did not re-implement these models but simply took the reported performance for the purpose of comparison.
SNLI: • W-by-W Attention: The model by Rocktäschel et al. (2015), who first introduced the attention mechanism into text entailment. • match-LSTM: The model by Wang & Jiang (2016b), which concatenates the matched words as the inputs of an LSTM. • LSTMN: Long short-term memory-networks proposed by Cheng et al. (2016). • Decomp Attention: Another “compare-aggregate” model proposed by Parikh et al. (2016). • EBIM+TreeLSTM: The state-of-the-art model proposed by Chen et al. (2016) on the SNLI dataset.
InsuranceQA: • IR model: This model by Bendersky et al. (2010) learns the concept information to help rank the candidates. • CNN with GESD: This model by Feng et al. (2015) uses Euclidean distance and dot product between sequence representations built through convolutional neural networks to select the answer. • Attentive LSTM: Tan et al. (2016) used soft-attention mechanism to select the most important information from the candidates according to the representation of the questions. • IARNN-Occam: This model by Wang et al. (2016) adds regularization on the attention weights. • IARNN-Gate: This model by Wang et al. (2016) uses the representation of the question to build the GRU gates for each candidate answer.
WikiQA: • IARNN-Occam and IARNN-Gate as introduced before. • CNN-Cnt: This model by Yang et al. (2015) combines sentence representations built by a convolutional neural network with logistic regression. • ABCNN: This model is Attention-Based Convolutional Neural Network proposed by Yin et al. (2015). • CubeCNN proposed by He & Lin (2016) builds a CNN on all pairs of word similarity.
MovieQA: All the baselines we consider come from Tapaswi et al. (2016)’s work: • Cosine Word2Vec: A sliding window is used to select the answer according to the similarities computed
through Word2Vec between the sentences in plot and the question/answer. • Cosine TFIDF: This model is similar to the previous method but uses bag-of-word with tf-idf scores to compute similarity. • SSCB TFIDF: Instead of using the sliding window method, a convolutional neural network is built on the sentence level similarities.
3.3 ANALYSIS OF RESULTS
We use accuracy as the evaluation metric for the datasets MovieQA, InsuranceQA and SNLI, as there is only one correct answer or one label for each instance. For WikiQA, there may be multiple correct answers, so evaluation metrics we use are Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR).
We observe the following from the results. (1) Overall, we can find that our general “compare-aggregate” structure achieves the best performance on the MovieQA, InsuranceQA and WikiQA datasets and very competitive performance on the SNLI dataset. Especially for the InsuranceQA dataset, with any comparison function we use, our model can outperform all the previous models. (2) The comparison method SUBMULT+NN is the best in general. (3) Some simple comparison functions can achieve better performance than the neural network or neural tensor network comparison functions. For example, the simplest comparison function EUCCOS achieves nearly the best performance on the MovieQA dataset, and the element-wise comparison functions, which do not need parameters, can achieve the best performance on the WikiQA dataset. (4) We find the preprocessing layer and the attention layer for word selection to be important in the “compare-aggregate” structure, through experiments removing these two layers separately. We also see that for sequence matching with a big difference in length, such as the MovieQA and InsuranceQA tasks, the attention layer plays a more important role. For sequence matching with a smaller difference in length, such as the WikiQA and SNLI tasks, the preprocessing layer plays a more important role. (5) For the MovieQA, InsuranceQA and WikiQA tasks, our preprocessing layer is order-insensitive, so it does not take context information into consideration during the comparison, but our model can still outperform previous work with order-sensitive preprocessing layers. With this finding, we believe the word-by-word comparison part plays a very important role in these tasks. We will further explore the preprocessing layer in the future.
3.4 FURTHER ANALYSES
To further explain how our model works, we visualize the max values in each dimension of the convolutional layer. We use two examples shown in Table 1 from MovieQA and InsuranceQA datasets respectively. In the top of Figure 2, we can see that the plot words that also appear in either the question or the answer will draw more attention by the CNN. We hypothesize that if the nearby words in the plot can match both the words in question and the words in one answer, then this answer is more likely to be the correct one. Similarly, the bottom one of Figure 2 also shows that the CNN will focus more on the matched word representations. If the words in one answer continuously match the words in the question, this answer is more likely to be the correct one.
4 RELATED WORK
We review related work in three types of general structures for matching sequences.
Siamense network: These kinds of models use the same structure, such as RNN or CNN, to build the representations for the sequences separately and then use them for classification. Then cosine similarity (Feng et al., 2015; Yang et al., 2015), element-wise operation (Tai et al., 2015; Mou et al., 2016) or neural network-based combination Bowman et al. (2015) are used for sequence matching.
Attentive network: Soft-attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) has been widely used for sequence matching in machine comprehension (Hermann et al., 2015), text entailment (Rocktäschel et al., 2015) and question answering (Tan et al., 2016). Instead of using the final state of RNN to represent a sequence, these studies use weighted sum of all the states for the sequence representation.
Compare-Aggregate network: This kind of framework performs word-level matching (Wang & Jiang, 2016a; Parikh et al., 2016; He & Lin, 2016; Trischler et al., 2016; Wan et al., 2016). Our work falls under this framework, but our structure differs from previous models and can be applied to different tasks. In addition, we analyze the different word-level comparison functions separately.
5 CONCLUSIONS
In this paper, we systematically analyzed the effectiveness of a “compare-aggregate” model on four datasets representing different tasks. Moreover, we compared different kinds of word-level comparison functions and found that some element-wise comparison functions can outperform the others. According to our experimental results, many different tasks can share the same “compare-aggregate” structure. In future work, we would like to test its effectiveness on multi-task learning.
6 ACKNOWLEDGMENTS
This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centres in Singapore Funding Initiative. | 1. What is the focus of the paper in terms of NLP tasks?
2. Can you describe the basic framework of the proposed model?
3. What are the different methods for matching text sequences compared in the paper?
4. How do element-wise subtraction/multiplication operations contribute to the model's performance?
5. What are the limitations or weaknesses of the paper's approach? | Review | Review
This paper proposes a compare-aggregate model for NLP tasks that require semantically comparing text sequences, such as question answering and textual entailment.
The basic framework of this model is to apply a convolutional neural network (aggregation) after an element-wise operation (comparison) over the attentive outputs of the LSTMs.
The highlighted part is the comparison: the paper compares several different methods for matching text sequences, and the element-wise subtraction/multiplication operations are shown to achieve generally better performance on four different datasets.
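For reference, the two element-wise comparison functions in question can be written as the following minimal sketch (PyTorch-style; the function names are ours):

```python
import torch

def sub(a, h):
    # SUB: squared element-wise difference, closely related to Euclidean distance
    return (a - h) * (a - h)

def mult(a, h):
    # MULT: element-wise product, closely related to cosine similarity
    return a * h
```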
The weak point is that this is incremental work and somewhat lacking in innovation. A qualitative evaluation of how subtraction, multiplication and the other comparison functions perform on varied kinds of sentences would be more interesting.
ICLR | Title
A Compare-Aggregate Model for Matching Text Sequences | 1. What is the focus of the paper, and what does it propose?
2. What are the strengths of the proposed approach, particularly in terms of its comparison functions and use of convolutional neural networks?
3. What are the weaknesses of the paper, especially regarding its experiments and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns or suggestions regarding the presentation of the results, such as the legibility of figures? | Review | Review
This paper proposes a compare-aggregate framework that performs word-level matching followed by aggregation with convolutional neural networks. It compares six different comparison functions and evaluates them on four datasets. Extensive experimental results have been reported and compared against various published baselines.
The paper is well written overall.
A few detailed comments:
* page 4, line 5: including a some -> including some
* What's the benefit of the preprocessing and attention step? Can you provide the results without it?
* Figure 2 is hard to read, especially on a printed hard copy. Please enhance the quality.
ICLR | Title
A Compare-Aggregate Model for Matching Text Sequences | 1. What is the focus of the paper regarding natural language understanding problems?
2. What are the strengths of the proposed approach, particularly in its ability to work well and its generality?
3. What are the weaknesses of the model, such as its sensitivity to word order?
4. Are there any concerns regarding the attention strategy used, its similarity to previous works, and its potential for overfitting?
5. Do you have any questions about the implementation of the model, such as the choice of parameters for preprocessing the question and answer? | Review | Review
The paper presents a general approach to modeling for natural language understanding problems with two distinct textual inputs (such as a question and a source text) that can be aligned in some way. In the approach, soft attention is first used to derive alignments between the tokens of the two texts; then a comparison function uses the resulting alignments (represented as pairs of attention queries and attention results) to derive representations that are aggregated by a CNN into a single vector from which an output can be computed.
This work is timely. Language understanding problems of this kind are a major open issue in NLP, and are just at the threshold of being addressable with representation learning methods. The work presents a general approach which is straightforward and reasonable, and shows that it can yield good results. The work borders on incremental (relative to their earlier work or that of Parikh et al.), but it contributes in enough substantial ways that I'd strongly recommend acceptance.
Detail:
- The model, at least as implemented for the problems with longer sequences (everything but SNLI), is not sensitive to word order. It is empirically competitive, but this insensitivity places a strong upper bound on its performance. The paper does make this clear, but it seems salient enough to warrant a brief mention in the introduction or discussion sections.
- If I understand correctly, your attention strategy is based more closely on the general/bilinear strategy of Luong et al. '15 than it is on the earlier Bahdanau work. You should probably cite the former (or some other more directly relevant reference for that strategy).
- Since the NTN risks overfitting because of its large number of parameters, did you try using a version with input dimension l and a smaller output dimension m (so an l*l*m tensor)?
- You should probably note that SubMultNN looks a lot like the strategy for *sentence*-level matching in the Lili Mou paper you cite.
- Is there a reason you use the same parameters for preprocessing the question and answer in (1)? These could require different things to be weighted highly. |
ICLR | Title
Performance Disparities Between Accents in Automatic Speech Recognition
Abstract
Automatic speech recognition (ASR) services are ubiquitous. Past research has identified discriminatory ASR performance as a function of racial group and nationality. In this paper, we expand the discussion by performing an audit of some of the most popular English language ASR services using a large and global data set of speech from The Speech Accent Archive. We show that, even when controlling for multiple linguistic covariates, ASR service performance has a statistically significant relationship to the political alignment of the speaker’s birth country with respect to the United States’ geopolitical power. We discuss this bias in the context of the historical use of language to maintain global and political power.
1 INTRODUCTION
Automatic speech recognition (ASR) services are a key component of the vision for the future of human-computer interaction. However, many users are familiar with the frustrating experience of repeatedly not being understood by their voice assistant (Harwell, 2018), so much so that frustration with ASR has become a culturally-shared source of comedy (Connell & Florence, 2015; Mitchell, 2018).
Bias auditing of ASR services has quantified these experiences. English language ASR has higher error rates: for Black Americans compared to white Americans (Koenecke et al., 2020; Tatman & Kasten, 2017); for Scottish speakers compared to speakers from California and New Zealand (Tatman, 2017); and for speakers who self-identify as having Indian accents compared to speakers who self-identify as having American accents (Meyer et al., 2020). It should go without saying, but everyone has an accent – there is no “unaccented” version of English (Lippi-Green, 2012). Due to colonization and globalization, different Englishes are spoken around the world. While some English accents may be favored by those with class, race, and national origin privilege, there is no technical barrier to building an ASR system which works well on any particular accent. So we are left with the question, why does ASR performance vary as it does as a function of the global English accent spoken? This paper attempts to address this question quantitatively using a large public data set, The Speech Accent Archive (Weinberger, 2015), which is larger in number of speakers (2,713), number of first languages (212), and number of birth countries (171) than other data sets previously used to audit ASR services, and thus allows us to answer richer questions about ASR biases. Further, by observing historical patterns in how language has shifted power, our paper provides a means for readers to understand how ASR may be operating today.
Historically, accent and language have been used as a tool of colonialism and a justification of oppression. Colonial power, originally British and then of its former colonies, used English as a tool to “civilize” their colonized subjects (Kachru, 1986), and their accents to justify their lower status. English as a lingua franca today provides power to those for whom English is a first language. People around the world are compelled to promote English language learning in education systems in order to avail themselves of the privilege it can provide in the globalized economy (Price, 2014). This spread of English language may be “reproducing older forms of imperial political, economic, and cultural dominance”, but it also exacerbates inequality along neoliberal political economic lines (Price, 2014). In short, the dominance of the English language around the world shifts power in ways that exacerbate inequality.
Further, English is and has historically been used as a nationalist tool in the United States to justify white conservative fears that immigrants pose an economic and political threat to them and has been
used to enforce the cultural assimilation of immigrants (Lippi-Green, 2012). We note that, even within the United States, “Standard American English” is a theoretical concept divorced from the reality of wide variations in spoken English across geographical areas, race and ethnicity, age, class, and gender (Lippi-Green, 2012). As stated in a resolution of the Conference on College Composition and Communication in 1972, “The claim that any one dialect is unacceptable amounts to an attempt of one social group to exert its dominance over another” (Lippi-Green, 2012). The social construct of language has real and significant consequences Nee et al. (2022), for example, allowing people with accents to be passed over for hiring in the United States, despite the Civil Rights Act prohibiting discrimination based on national origin (Matsuda, 1991). Accent-based discrimination can take many forms — people with accents deemed foreign are rated as less intelligent, loyal, and influential (Lawrence, 2021). Systems based on ASR automatically enforce the requirement that one code switch or assimilate in order to be understood, rejecting the “communicative burden” in which two people will “find a communicative middle ground and foster mutual intelligibility when they are motivated, socially and psychologically, to do so” (Lippi-Green, 2012). By design, then, ASR services operate like people who reject their communicative burden, which Lippi-Green reports is often due to their “negative social evaluation of the accent in question” (Lippi-Green, 2012). As Halcyon Lawrence reports from experience as a speaker of Caribbean English, “to create conditions where accent choice is not negotiable by the speaker is hostile; to impose an accent upon another is violent” (Lawrence, 2021).
Furthermore, we are concerned about discriminatory performance of ASR services because of its potential to create a class of people who are unable to use voice assistants, smart devices, and automatic transcription services. If technologists decide that the only user interface for a smart device will be via voice, a person who is unable to be accurately recognized will be unable to use the device at all. As such, ASR technologies have the potential to create a new disability, similar to how print technologies created the print disability “which unites disparate individuals who cannot read printed materials” (Whittaker et al., 2019). The biased performance of ASR, if combined with an assumption that ASR works for everyone, creates a dangerous situation in which those with particular English language accents may find themselves unable to obtain ASR service.
The consequences for someone lacking the ability to obtain reliable ASR may range from inconvenient to dangerous. Serious medical errors may result from incorrect transcription of physician’s notes Zhou et al. (2018), which are increasingly transcribed by ASR. There is, currently, an alarmingly high rate of transcription errors that could result in significant patient consequences, according to physicians who use ASR Goss et al. (2019). Other ASR users could potentially see increased danger: for example for smart wearables that users can use to call for help in an emergency Mrozek et al. (2021); or if one must repeat oneself multiple times when using a voice-controlled navigation system while driving a vehicle (and thus are distracted while driving); or if an ASR is one’s only means for controlling one’s robotic wheelchair Venkatesan et al. (2021).
Given that English language speakers have a multitude of dialects across the world, it is important to consider the ability of English language ASR services to accurately transcribe the speech of their global users. Given past research results (Tatman, 2017; Meyer et al., 2020) and the United States headquarters of Amazon, Google, and Microsoft, we hypothesize that ASR services will transcribe with less error for people who were born in the United States and whose first language is English. We hypothesize that performance of ASR systems is related to the age of onset, the age at which a person first started speaking English, which is known to be highly correlated with perceived accent (Flege et al., 1995; Moyer, 2007; Dollmann et al., 2019). But beyond this, based on the nationalist and neoliberal ways in which language is used to reinforce power, we hypothesize that ASR performance can be explained in part by the power relationship between the United States and speakers’ birth countries. That is, for the same age of onset of English and other related covariates among speakers not born in the United States, we expect that speakers born in countries which are political allies of the United States will have ASR performance that is significantly better than those born in nations which are not aligned politically with the United States. This paper tests and validates these hypotheses by utilizing a data set with significantly more speakers, across a large number of first languages and birth countries, than those which have previously been used for the evaluation of English ASR services.
2 RELATED WORK
2.1 AUDITS OF AUTOMATIC SPEECH RECOGNITION
Our work builds on a small but impactful body of literature investigating the disparities in performance of commercially available English ASR services.
Gender has been inconsistently associated with English ASR performance: significantly better for male speakers than female speakers (Tatman, 2017), for female speakers than male speakers (Koenecke et al., 2020; Goldwater et al., 2008), or with no significant performance difference (Tatman & Kasten, 2017; Meyer et al., 2020).
There has been evidence that race and geographic background (especially as it relates to accent and dialect) has impact on ASR performance. Speakers from Scotland were found to have worse English ASR performance than speakers from New Zealand and the United States (Tatman, 2017), while speakers who self-identified as speaking with Indian English accents had transcriptions with higher error rates versus speakers who self-identified as speaking with US English accents (Meyer et al., 2020). Finally, ASR services consistently underperform for Black speakers in comparison to white speakers (Tatman & Kasten, 2017; Koenecke et al., 2020).
Significant research has addressed how to make ASR more robust to accent Liu et al. (2022), e.g., by training accent-based modifications to particular layers of a single model; mapping between the phones of two different accents; or using an adversarial network to separate accent-invariant and -variant features. Unlabelled clustering may be used to find accents that are under-represented; oversampling them can then improve performance Dheram et al. (2022). Our work is to audit rather than to repair ASR disparities.
Multiple aspects of spoken English are affected by the particular accent of the speaker, including both: a) how words are pronounced (Lippi-Green, 2012), and b) what words are used and how sentences are structured. By studying unstructured speech, researchers obtain a view of the discrimination experienced by speakers in transcription accuracy as a function of multiple aspects of accent (Koenecke et al., 2020). This paper presents results from a data set that controls for word choice and sentence structure (Weinberger, 2015) so that we can focus on the impact of word pronunciation on ASR performance.
2.2 HOW LANGUAGE IS USED TO CONTROL AND DIVIDE
Language, and in many cases specifically English, has a history of being “standardized” by those in power as a medium through which to exert influence Nee et al. (2022). Examples range from English being used to rank and hierarchize those deemed “other” in the United States (Lippi-Green, 2012) to the deliberate introduction of variations of English in India during British colonization in order to maintain societal hierarchies and divisions (Naregal, 2001). We argue the impact of ASR systems is towards more standardization of the English language, which is part of a history of how standardization of language has been a tool to maintain power.
2.3 HOW TECHNOLOGY AND AI SHIFT POWER
Auditing ASR services is just one step in building an understanding of how artificial intelligence (AI) has the potential to either consolidate or shift power in society, both on a local and global scale. Whether it be the feedback loops we observe in predictive policing (Ensign et al., 2018; Richardson et al., 2019), the involvement (and experimentation) of Cambridge Analytica in Kenyan elections (Nyabola, 2018), or the significant portion of Amazon Mechanical Turk crowdwork performed by workers in India (Ross et al., 2010; Difallah et al., 2018), all of these phenomena are a part of the coloniality of power:
[T]he coloniality of power can be observed in digital structures in the form of socio-cultural imaginations, knowledge systems and ways of developing and using technology which are based on systems, institutions, and values which persist from the past and remain unquestioned in the present (Mohamed et al., 2020).
Through our audit of English ASR services (with recordings from speakers born in many different countries) in combination with a historical analysis of the ways in which language and power have been closely intertwined, we both bring attention to and report evidence demonstrating the coloniality of ASR as it exists today.
3 MATERIALS AND METHODS
In this section, we describe our data and procedures for our quantitative study of ASR. To stay relevant with the published research on ASR bias, we select ASR services from the five evaluated in Koenecke et al. (2020). The top three performing ASR services in their extensive tests were Google, Amazon, and Microsoft. All three companies are notable not just as cloud service providers, but in the consumer product space in which their ASR services are implemented as part of their own devices. Recordings were transcribed using the three companies’ respective speech-to-text APIs in 2021.
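As one illustration of this step, a minimal request to Google's speech-to-text API might look like the following sketch. It assumes the google-cloud-speech Python client with configured credentials and is not the exact pipeline used in this study; Amazon and Microsoft expose analogous APIs.

```python
from google.cloud import speech

def transcribe(path: str, language_code: str = "en-US") -> str:
    """Send one recording to Google's speech-to-text API; join the top hypotheses."""
    client = speech.SpeechClient()
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(language_code=language_code)
    response = client.recognize(config=config, audio=audio)
    return " ".join(result.alternatives[0].transcript for result in response.results)
```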
3.1 WORD INFORMATION LOST (WIL)
To evaluate the correctness of the ASR service transcriptions against the elicitation paragraph that speakers read, we use a metric specifically designed for the assessment of ASR known as word information lost (WIL) (Morris, 2002; Morris et al., 2004). WIL is derived from an information theoretic measure of the mutual information between two sources. In short, for our case, it is a distance between the elicitation paragraph and the transcription for a speaker. The WIL is given by:
\[ \mathrm{WIL} = 1 - \frac{H^{2}}{(H + S + D)(H + S + I)} \tag{1} \]
where H is the number of hits, D is the number of deletions, I is the number of insertions, and S is the number of substitutions between the elicitation paragraph and the transcription.
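A minimal sketch of this computation follows, assuming the hit, substitution, deletion, and insertion counts have already been obtained from a word alignment; recent versions of the jiwer package (see Appendix B.2) expose roughly the same quantity directly, e.g., as jiwer.wil(reference, hypothesis).

```python
def word_information_lost(hits: int, subs: int, dels: int, ins: int) -> float:
    """WIL from Equation (1): 1 - H^2 / ((H+S+D)(H+S+I))."""
    denom = (hits + subs + dels) * (hits + subs + ins)
    if denom == 0:
        return 1.0  # degenerate case: nothing aligned at all
    return 1.0 - (hits ** 2) / denom

print(word_information_lost(hits=60, subs=5, dels=3, ins=2))  # ~0.21
```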
Compared to another commonly used metric, word error rate (WER), WIL offers distinct advantages:
1. WIL is defined from 0 (all information preserved) to 1 (no information preserved), whereas WER is similarly lower bounded by 0 but has no upper bound.
2. WIL is symmetric between deletions and insertions, unlike WER, which, especially at high error rates, weights insertions more than deletions (Morris, 2002; Morris et al., 2004).
3. The inaccuracies of WER are more severe at higher vs. lower error rates (Morris, 2002), which can be problematic in linear regression studies.
Particularly in the context of our transcription task and the resulting analyses, the advantages of WIL make it the better metric by which to compare ASR performance.
3.2 SPEECH ACCENT ARCHIVE
Our recordings come from The Speech Accent Archive, a collection of recordings of speakers born across the world and with different first languages all reading the same text (Weinberger, 2015). Full details on the methodology used in the recording collection and processing are available in Section B in the Appendix. After answering demographic questions, speakers were presented with the elicitation paragraph in Section 3.2.1 and allowed to ask questions about words they did not understand before reading the paragraph once for the recording.
3.2.1 ELICITATION PARAGRAPH
The elicitation paragraph below was crafted by linguists to include many of the sounds and most of the consonants, vowels, and clusters that are common to English (Weinberger, 2015).
Please call Stella. Ask her to bring these things with her from the store: Six spoons of fresh snow peas, five thick slabs of blue cheese, and maybe a snack for her brother Bob. We also need a small plastic snake and a big toy frog for the
kids. She can scoop these things into three red bags, and we will go meet her Wednesday at the train station.
The methodology for recording, demographic information collected, and careful construction of the elicitation paragraph means that The Speech Accent Archive contains information particularly well-suited to analyzing how English ASR services perform across a global population.
Further, the use of a constant text allows us to produce results that control for particular aspects of accent. Since all speakers read the same paragraph, any disparity in ASR performance will not be a result of word choice or sentence structure or length — heterogeneities in these may complicate ASR disparity analysis Liu et al. (2022). We can use this to narrow in on ASR disparities that result from the manner of speaking the same words across different English language accents.
3.2.2 SPEAKER INFORMATION COLLECTED
The information on speakers collected at the time of recording includes their age, sex (recorded, unfortunately, as a single binary male/female variable), country of birth, first language, age of onset of English speaking, whether they had lived in an English-speaking country, and if so, for how long, and whether the speaker’s English learning environment was academic or naturalistic. Age of onset is particularly useful, as it has been shown to be correlated with perceived accent (Flege et al., 1995; Moyer, 2007; Dollmann et al., 2019). This speaker-level information is integrated into the regression performed in Section 4.2.
3.2.3 DATA DESCRIPTION
The data set includes 2,713 speakers with an average age of 32.6 years, and an average age of onset of English speaking of 8.9. The speakers represent 212 first languages across 171 birth countries. Figure 2 lists the top ten first languages represented in our data set by the number of speakers.
We note that at the time of recording, 2,023 (74.6%) speakers were either current or previous residents of the United States. By default, most ASR services that would be used on and by these speakers while they are in the United States would likely be configured to use the United States English dialect for transcription. For some of our results, we also use this dialect as the default. In addition, in Section 4.3, we conduct analyses using the “best of” all transcription service dialect settings, and show that the results are primarily the same.
4 RESULTS
4.1 GROUP-LEVEL ANALYSIS
In Figure 1, we compare WIL across ASR services grouped by whether a speaker’s first language was English. While overall performance differs between services, with Microsoft performing best, followed by Amazon and then Google, all services performed significantly better (P < 0.001) for speakers whose first language was English. On average across all services, WIL was 0.14 lower for first language English speakers. By service, the sizes of the disparities followed overall performance, with a difference of 0.17, 0.14, and 0.10 for Google, Amazon, and Microsoft, respectively.
In Figure 2, we highlight mean ASR performance for the ten first languages for which we have the most data. The order of performance found in Figure 1 is maintained across services — across all ten first languages, Microsoft performs the best, followed by Amazon, and then Google. We find that all the services perform best for those whose first language is English, followed by Dutch and German. The worst performance is on speakers whose first languages are Mandarin and Spanish.
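As an illustrative sketch of this kind of group-level comparison, the following assumes a per-speaker results table; the file name, column names, and the choice of Welch's t-test are expository assumptions, not a description of the exact analysis performed.

```python
import pandas as pd
from scipy import stats

# Hypothetical layout: one row per speaker with 'wil' and 'first_language' columns.
df = pd.read_csv("transcription_results.csv")
df["english_l1"] = df["first_language"].eq("english")

print(df.groupby("english_l1")["wil"].mean())    # group means, as in Figure 1
t, p = stats.ttest_ind(df.loc[df["english_l1"], "wil"],
                       df.loc[~df["english_l1"], "wil"],
                       equal_var=False)           # Welch's t-test
print(f"t = {t:.2f}, P = {p:.3g}")
```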
4.2 SPEAKER-LEVEL REGRESSION
Motivated by the results in Section 4.1, we construct a linear regression to understand what factors have a significant effect on the performance of ASR services. As discussed in Section 1, the way a person speaks English has and continues to be a basis for discrimination by those in power, and so we include covariates to understand how this discrimination may transfer to ASR services. We want to know if ASR performance is correlated with how the speaker is perceived from a lens of United States global political power. As a broad single measure for this political power, we encode if the speaker’s birth country is a part of the North Atlantic Treaty Organization (NATO) as of January 2022.
Specifically, we include the following covariates for each speaker: age; age of onset of English speaking; sex; English learning environment; if their first language is Germanic (as a measure of first language similarity to English, with the list of Germanic languages from Glottolog (Hammarström et al., 2021)); if their birth country is a part of NATO; if they have ever lived in an English-speaking country, and if so (as a nested variable), for how long. We also create nested covariates for English and the United States in the Germanic first language and birth country in NATO covariates respectively to separate the effects of English and the United States specifically.
In order to satisfy the assumptions for linear regression, in particular the normality of the residuals, we perform a square root transform on our response variable, WIL. The diagnostic plots for the regression assumptions can be found in Section C in the Appendix.
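A sketch of this regression using the statsmodels formula API is shown below. The column names are hypothetical and the nested covariates are approximated with interaction terms, so it should be read as an outline of the specification rather than the exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("speakers.csv")            # hypothetical: one row per speaker
df["sqrt_wil"] = np.sqrt(df["wil"])         # square root transform of the response

# Nested covariates (English within Germanic, USA within NATO, years within
# lived-in-English-speaking-country) are approximated here with interactions.
model = smf.ols(
    "sqrt_wil ~ age + onset_age + sex + learn_env"
    " + germanic_l1 + germanic_l1:english_l1"
    " + birth_nato + birth_nato:birth_usa"
    " + lived_english + lived_english:years_english",
    data=df,
).fit()
print(model.summary())
```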
4.2.1 REGRESSION RESULTS
The results of the regression are shown in Table 1 in Section A of the Appendix under the headings Amazon, Google, and Microsoft. We find multiple covariates that have a significant effect across all three services (P < 0.05). Highlights of these findings include:
• WIL increases with a later age of onset of English speaking. As described in Section 3.2, age of onset is correlated with perceived accent.
• WIL decreases with speaking a Germanic first language, having controlled for the effect on WIL of English as a first language, which is also significant.
• Having lived in an English-speaking country has a negative effect on WIL, as does the number of years spent living an English-speaking country.
• Finally, being born in a country that is a part of NATO but is not the United States is associated with a lower WIL.
The final result suggests that a person’s birth in a country proximate to the United States’ geopolitical power is related to how ASR services perform on their speech.
Some covariates are only significant for certain services: ASR services from Amazon and Microsoft perform significantly worse on males than females. Google and Microsoft perform significantly better for those born in the United States, while Amazon and Google perform significantly better for those who learned English in a naturalistic environment rather than an academic one.
4.3 TRANSCRIPTIONS USING OTHER ENGLISH SETTINGS
As explained in Section 3.2.3, a majority (74.6%) of the speakers in the data set were or had been residents of the United States at the time of recording. Thus, we used the United States English setting of all of the ASR services, as this was likely the setting that would be used on or by them.
However, ASR services do offer more settings for English. It is reasonable to ask how much using all of the English language settings available for a transcription service could improve WIL. We decided to understand this question for the service with the worst overall performance and largest disparity in performance, Google, as shown in Figure 1.
We transcribed the recordings using all available English settings that Google supported. Specifically, we try these English settings on Google’s ASR service: Australia, Canada, Ghana, Hong Kong, India, Ireland, Kenya, New Zealand, Nigeria, Pakistan, Philippines, Singapore, South Africa, Tanzania, United Kingdom and the United States. To give Google the best opportunity for improvement, for each speaker we took the lowest WIL across all settings’ transcriptions. Note that while this is guaranteed to offer the largest improvement, it is unrealistic to do in practice, since it requires knowledge of the ground-truth transcript. We refer to this as Google All Settings.
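A minimal sketch of the best-of-settings computation follows, assuming all of a speaker's transcripts under the different English settings are already in hand and that jiwer's wil function is used for scoring. As noted above, this oracle procedure requires the ground-truth text and so bounds the achievable improvement rather than describing a deployable method.

```python
import jiwer

def best_of_settings_wil(transcripts: dict[str, str], reference: str) -> float:
    """Oracle WIL: given {locale: transcript} for one speaker, keep the lowest."""
    return min(jiwer.wil(reference, t) for t in transcripts.values())

# Toy usage with two locale settings.
print(best_of_settings_wil(
    {"en-US": "please call stella", "en-GB": "please call stellar"},
    "please call stella"))  # 0.0 -- the en-US transcript matches exactly
```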
A comparison to Google’s original performance is offered in Figure 1, where Google refers to transcriptions generated only using the United States setting, and Google All Settings refers to WIL generated using the method above. Originally, we saw that for Google, first language English speakers had a WIL that was on average 0.17 lower than first language non-English speakers. When using the technique for Google All Settings on both first language English and non-English speakers, we notice that a disparity of a similar size (0.14, P < 0.001) still exists. In fact, even when we only use the United States English setting for speakers with a first language of English and allow first language non-English speakers to take the lowest WIL from all settings, the disparity is still a considerable 0.10 (P < 0.001).
We also note that the significance of the factors in the linear regression did not change when compared to Google’s original transcription performance. This result is displayed in Table 1 in Section A of the Appendix under the column Google All Settings. This suggests that even Google’s attempts to adapt their technology to different global settings are subject to the same biases we highlighted originally.
5 DISCUSSION
Across all three ASR services tested (Amazon, Google, and Microsoft), we find significant disparities in performance between those whose first language is English and those whose first language is not English. Moreover, we find that these disparities are connected not only to the age at which an individual began speaking English, the environment they learned it in, and whether or not their first language was Germanic, but also whether or not their birth country is a part of NATO, a representation of political alignment with the United States. When, with one of the services tested (Google), we again transcribed recordings using all of the available English language locality settings, we saw all of our significant results remain the same, implying that the current set of international English language models offered does not solve the inherent problems of bias we observe in ASR.
5.1 HISTORICAL CONTEXT
In many ways, we are not surprised by the ways in which a neoliberal capitalist power structure provides better services to a select group of English language speakers. In many ways, it parallels historical colonial use of language, first to standardize language for the benefit of those in power, and second to benefit from hierarchies in language dialects.
Many of today’s globally dominant languages were imposed violently within its nation of origin and then the colonial encounter across the world. For instance, the French language was used to turn “peasants into Frenchmen” (Weber, 1976). Within France, French standardization served the dominant class, suppressed the culture of many of its own people, and worked to discipline labor and extract greater profit. ASR systems similarly serve to standardize language — any accent deemed nonstandard is not understood. Speakers are compelled to mimic the dominant accent, which is felt as coerced or even violent (Lawrence, 2021). Providers don’t wait to ensure that their ASR product works well across dialects before deploying it – that would break the “move fast and break things” rule and allow competitors to establish market dominance Hicks (2021).
Second, the notion of a “standard” language dialect enables social disparity. Education policy during the US occupation of the Philippines enforced racialization; English language lessons were used to emphasize the inferior accent, hygiene and cleanliness of the lower status “Oriental” students (McElhinny & Heller, 2020). In India, British education policies deliberately taught different English dialects to different castes to grant cultural and social authority to a class of elites that aided British governance. Dominant castes entered English medium schools and were tutored by people from England (Gauba, 1974), and were expected to “refine” the “vernacular” languages and teach those to other Indians. They did so while securing their own privileged access to colonial English (Chandra, 2012) and the state power that came with it as the favoured language of governance. The elite monopolized the “correct” language by denying it to other Indians, and thereby made the language selective and exclusive.
ASR services are similarly exclusive; literally providing less control of systems to those with disfavoured accents, and less capability for those in professions which use ASR transcription, e.g., medical professionals. Exclusivity serves an important economic purpose by allowing providers to market beyond the commodity that ASR really is. In this case, it enables marketing of the identity people can have by using it, including the identity of speaking “correct” (white U.S.) English.
5.2 FORESIGHT FOR FUTURE ASR DEVELOPMENT
We provide one example of how the lessons from the historical use of language for dominance may apply to discussions of how ASR will evolve in the future. Academic researchers and industry service providers have made claims that the problems are temporary. For example, Amazon claims “As more people speak to Alexa, and with various accents, Alexa’s understanding will improve” (Harwell, 2018). As another example, researchers state about the ASR racial performance gap: “The likely cause of this shortcoming is insufficient audio data from black speakers when training the models” (Koenecke et al., 2020).
However, historical hindsight has not indicated that ASR services will improve over time without deeper structural changes. First, by selling products that work for one accent above others, technology companies make speakers of other accents less likely to use their products. Members of groups
historically subject to disproportionate state surveillance may be more hesitant to consent to contribute data towards AI technologies (Jo & Gebru, 2020). Both problems operate as a feedback loop to keep disfavored speakers out of future ASR training data. Further, ML algorithms may naturally tend to discriminate against a smaller group in the population because sacrificing performance on that group may allow reducing average “cost” on the population as a whole, even if the training data represents them in proportion to their population (Zou & Schiebinger, 2018). Finally, ground truth labelling is likely to be less accurate for members of a disfavored group, and incorrect labels will be fed back into training of future systems (Denton, 2019). Whittaker et al. describe an example in computer vision for autonomous vehicles — the more video of people in wheelchairs used in training, the less likely it was to label a person backing their wheelchair across the street at a crosswalk as a person (Whittaker et al., 2019). Further, in speaker verification, Hutiri & Ding (2022) lay out how multiple layers of bias contribute to performance disparities. Instead, historical hindsight indicates that the problem in ASR is more systematic, related to the more fundamental nature of the use of standardized language to divide and provide the benefits of control.
In short, the techno-optimist idea that ASR accent bias will resolve itself in time is unconvincing. Similar to how equity and justice should be centered in each layer of linguistic structure for equitable NLP design Nee et al. (2022), active work will be required to design ASR services that repair damage caused by colonial and post-colonial uses of language and accent to discriminate.
5.3 CONCLUSION
This paper extends the results reported in prior English language ASR performance audits. In part, we provide an audit of ASR using a much larger data set containing speech from a large number of countries of birth as well as a large number of first languages. The quantitative results show how ASR services perform on speakers whose first language is English vs. those for which it is not, and how ASR services perform compared to each other. More critically, we find that, controlling for several related covariates about first language, all ASR services perform significantly worse if the speaker was born outside of a NATO country; in effect, in a country further from United States geopolitical power. We argue that this has historical parallel in the ways in which language has been used historically to maintain global power. By explaining these parallels, and by providing quantitative evidence for the effect, we hope that researchers and developers hoping to reduce disparities in ASR services will be better able to identify the systematic nature of the problems.
6 ETHICS STATEMENT
While the creation and continued upkeep of The Speech Accent Archive does not fall within the scope of this work, we note that all subjects did sign an informed consent form before being recorded, available at https://accent.gmu.edu/pdfs/consent.pdf, and that all data used in this work was anonymized.
It is important to recognize that the data set used in this study overrepresents some groups/backgrounds and underrepresents others, while also not including information on other potential influencing factors such as socioeconomic status. One area of focus for future data collection could be speakers who do not currently reside in the United States.
7 REPRODUCIBILITY STATEMENT
The recordings and associated demographic data used in this experiment are available upon request from the maintainers of The Speech Accent Archive (Weinberger, 2015), or at https://accent. gmu.edu/. Section 3 describes the data set and error metric used in our analysis, while Section B in the Appendix describes the data pipeline, including the steps of recording collection, submission to ASR services, transcript processing, and computation of the error rate. The scripts used in the cleaning and analysis of the data will be hosted on GitHub upon publication of the paper.
A REGRESSION TABLE
Table 1: Speaker-level regression. Dependent variable: square root of Word Information Lost (WIL). Standard errors in parentheses; brackets denote the covariate within which a term is nested. Reference classes: Female and Academic Learning Environment. *P < 0.05.

| Covariate | Amazon | Google | Microsoft | Google All Settings |
| --- | --- | --- | --- | --- |
| Age at Time of Recording | 0.001* (0.0003) | 0.002* (0.0004) | 0.001* (0.0004) | 0.001* (0.0004) |
| Age of Onset of English Speaking | 0.006* (0.0005) | 0.004* (0.001) | 0.006* (0.001) | 0.005* (0.001) |
| Male | 0.015* (0.005) | 0.001 (0.007) | 0.022* (0.006) | 0.005 (0.006) |
| Naturalistic Learning Environment | −0.028* (0.009) | −0.042* (0.012) | −0.018 (0.010) | −0.029* (0.010) |
| Unknown Learning Environment | −0.135 (0.081) | −0.007 (0.110) | −0.061 (0.095) | −0.008 (0.093) |
| Germanic First Language | −0.074* (0.012) | −0.082* (0.017) | −0.074* (0.014) | −0.071* (0.014) |
| First Language English [Germanic First Language] | 0.046* (0.017) | 0.124* (0.024) | 0.058* (0.020) | 0.084* (0.020) |
| Birth Country in NATO | −0.060* (0.007) | −0.041* (0.010) | −0.063* (0.009) | −0.040* (0.008) |
| Birth Country USA [Birth Country in NATO] | −0.003 (0.012) | −0.135* (0.016) | −0.027* (0.014) | −0.076* (0.013) |
| Lived in English-Speaking Country | −0.035* (0.009) | −0.053* (0.012) | −0.044* (0.010) | −0.057* (0.010) |
| Years in English-Speaking Country [Lived in English-Speaking Country] | −0.002* (0.0003) | −0.002* (0.0005) | −0.001* (0.0004) | −0.002* (0.0004) |
| Intercept | 0.400* (0.012) | 0.496* (0.016) | 0.283* (0.014) | 0.454* (0.013) |
| Observations | 2,713 | 2,713 | 2,713 | 2,713 |
| R² | 0.332 | 0.258 | 0.266 | 0.274 |
| Adjusted R² | 0.329 | 0.255 | 0.263 | 0.271 |
| Residual Std. Error (df = 2,701) | 0.139 | 0.190 | 0.164 | 0.160 |
| F Statistic (df = 11; 2,701) | 121.930* | 85.259* | 88.773* | 92.863* |
B SPEECH ACCENT ARCHIVE RECORDING AND PROCESSING
B.1 DATA COLLECTION
The following information about the data collection process comes from The Speech Accent Archive (Weinberger, 2015).
Subjects were sat 8-10 inches from the microphone and recorded individually in a quiet room. They were each asked the following questions:
• Where were you born?
• What is your native language?
• What other languages besides English and your native language do you know?
• How old are you?
• How old were you when you first began to study English?
• How did you learn English (academically or naturalistically)?
• How long have you lived in an English-speaking country? Which country?
Subjects were asked to look over the elicitation paragraph and ask questions about any unfamiliar words. Finally, they read the passage once into a high-quality recording device.
B.2 DATA PROCESSING
All recordings were initially converted into the mp3 file format and then subsequently converted into the formats necessary for transcription by each of the respective services. This was done to help control any effects which might arise from files being originally recorded in lossy instead of lossless formats.
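As one way such conversions can be performed, a pydub-based sketch follows; the 16 kHz mono WAV target is an assumption for illustration, since the exact formats required by each service are not listed here.

```python
from pydub import AudioSegment

def to_wav(mp3_path: str, wav_path: str) -> None:
    """Convert an mp3 recording to 16 kHz mono WAV before submission to an API."""
    (AudioSegment.from_mp3(mp3_path)
        .set_frame_rate(16000)
        .set_channels(1)
        .export(wav_path, format="wav"))
```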
Audio files were submitted to the respective APIs for all three service providers and the returned transcripts were then concatenated into a single string for each speaker. Across all services, only once did a service fail to return a transcription, and this occurred only for a specific triplet of service, speaker, and transcription dialect. Transcripts were then cleaned using the following process:
1. Semicolons were converted to spaces.
2. Characters were converted to lowercase.
3. Hyphens and forward and back slashes were replaced with spaces.
4. All currency symbols, ampersands, equals signs, octothorpes, and percent signs were separated by spaces on both sides.
5. The string was split on spaces to create words.
6. Punctuation at the beginning and end of words was replaced with spaces.
7. Leading and trailing spaces were stripped.
8. Words that were only spaces were deleted.
9. Words exactly equal to the characters “3”, “5”, and “6” were converted to “three”, “five”, and “six”, since these exact numbers appear in the elicitation paragraph as written in Section 3.2.1 and would be correct transcriptions.
10. Spaces were added back in between words and recombined into one string.
After putting all transcripts and the elicitation paragraph through this process, WIL was calculated using the jiwer Python package (Morris et al., 2004).
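An approximate Python rendering of this cleaning procedure is sketched below; it follows the listed steps closely but is not the exact released script, and a few ambiguous details (e.g., the precise symbol set in step 4) are filled in by assumption.

```python
import re
import string

NUMBER_WORDS = {"3": "three", "5": "five", "6": "six"}
SPACED_SYMBOLS = "$€£&=#%"  # currency symbols, ampersands, equals, octothorpes, percents

def clean(transcript: str) -> str:
    """Approximate the normalization steps listed above."""
    s = transcript.replace(";", " ").lower()                    # steps 1-2
    s = re.sub(r"[-/\\]", " ", s)                               # step 3
    s = re.sub(f"([{re.escape(SPACED_SYMBOLS)}])", r" \1 ", s)  # step 4
    words = []
    for w in s.split(" "):                                      # step 5
        w = w.strip(string.punctuation).strip()                 # steps 6-7
        if not w:                                               # step 8
            continue
        words.append(NUMBER_WORDS.get(w, w))                    # step 9
    return " ".join(words)                                      # step 10

print(clean("Please call Stella; ask her to bring 6 spoons."))
# -> "please call stella ask her to bring six spoons"
```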
C CHECKING REGRESSION ASSUMPTIONS
Before looking at the results of our regression, we evaluate the regression assumptions via diagnostic plots in Figures 3, 4, 5, and 6. Due to the square root transform which we performed on WIL (our response variable) in Section 4.2, the diagnostic plots show that our regression assumptions are satisfied, although there are some outliers to investigate. We analyzed each labelled outlier from the plots by hand, first by checking the speaker data to make sure there are no anomalies, and then by listening to the recording to ensure there are no audio issues. Having done this, we proceed to interpret the results of the regression as described in Table 1.
1. What are the main contributions and strengths of the paper regarding its critical examination of ASR accent bias?
2. What are the weaknesses or limitations of the paper, particularly in terms of its hypotheses and omissions in the related work section?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
In this work, the authors dissect the problem of discriminatory Automatic Speech Recognition (ASR) performance on accented speech along the following three dimensions:
By evaluating predominant ASR services on a large and global dataset of English speech in varying accents.
By identifying speaker covariates in a linear regression model that significantly impact ASR performance.
By examining the quantitative results within the larger context of the role of language in exacerbating inequality and making ASR services more exclusive.
Strengths And Weaknesses
The main strength of this work is that it takes a thorough critical look at ASR accent bias. The authors evaluate three popular ASR services by Microsoft, Google and Amazon on accented English speech from The Speech Accent Archive representing 212 first languages across 171 birth countries. They find that ASR performance is significantly better for speakers whose first language was English. While such observations have been previously documented, this work also examines a number of speaker covariates like age of onset of English speaking, environment where English was learned (naturalistic vs. academic), etc. These covariates were found to have a significant effect on ASR performance across all three services.
I consider the following to be the main weaknesses of this work:
The covariate referring to whether the speaker's birth country is a part of NATO was found to be associated with lower WILs. Based on this observation, the authors hypothesized that how far a speaker's birth country is from United States' geopolitical power is correlated with how ASR services perform on their speech. This hypothesis appears to be a bit tenuous. It's unclear if there are other hidden variables that might also be playing a role here. For example, the number of years the speaker lived in the US, how often the speaker interacts with native speakers of English, etc.
As the authors note, a majority of the speakers in the data set were residents of the United States at the time of recording. It would have made for a compelling story if the authors had created and released a more representative dataset of accented speech without the bias of US residency.
The related work section completely ignores the fairly large body of work that focuses on improving ASR for accented speech. For example, please refer to "Accented Speech Recognition: A Survey" by Hinsvark et al., 2021 for a recent survey. Citations to some other recent works on fairness in ASR are also missing. For example, "Model-based approach for measuring the fairness in ASR", Liu et al., 2021 and "Toward Fairness in Speech Recognition: Discovery and mitigation of performance disparities", Dheram et al., 2022.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written. The empirical analysis has been conducted on a publicly available dataset and the authors promise to release their scripts to ensure reproducibility. |
ICLR | Title
Performance Disparities Between Accents in Automatic Speech Recognition
Abstract
Automatic speech recognition (ASR) services are ubiquitous. Past research has identified discriminatory ASR performance as a function of racial group and nationality. In this paper, we expand the discussion by performing an audit of some of the most popular English language ASR services using a large and global data set of speech from The Speech Accent Archive. We show that, even when controlling for multiple linguistic covariates, ASR service performance has a statistically significant relationship to the political alignment of the speaker’s birth country with respect to the United States’ geopolitical power. We discuss this bias in the context of the historical use of language to maintain global and political power.
1 INTRODUCTION
Automatic speech recognition (ASR) services are a key component of the vision for the future of human-computer interaction. However, many users are familiar with the frustrating experience of repeatedly not being understood by their voice assistant (Harwell, 2018), so much so that frustration with ASR has become a culturally-shared source of comedy (Connell & Florence, 2015; Mitchell, 2018).
Bias auditing of ASR services has quantified these experiences. English language ASR has higher error rates: for Black Americans compared to white Americans (Koenecke et al., 2020; Tatman & Kasten, 2017); for Scottish speakers compared to speakers from California and New Zealand (Tatman, 2017); and for speakers who self-identify as having Indian accents compared to speakers who self-identify as having American accents (Meyer et al., 2020). It should go without saying, but everyone has an accent – there is no “unaccented” version of English (Lippi-Green, 2012). Due to colonization and globalization, different Englishes are spoken around the world. While some English accents may be favored by those with class, race, and national origin privilege, there is no technical barrier to building an ASR system which works well on any particular accent. So we are left with the question, why does ASR performance vary as it does as a function of the global English accent spoken? This paper attempts to address this question quantitatively using a large public data set, The Speech Accent Archive (Weinberger, 2015), which is larger in number of speakers (2,713), number of first languages (212), and number of birth countries (171) than other data sets previously used to audit ASR services, and thus allows us to answer richer questions about ASR biases. Further, by observing historical patterns in how language has shifted power. our paper provides a means for readers to understand how ASR may be operating today.
Historically, accent and language have been used as a tool of colonialism and a justification of oppression. Colonial power, originally British and then of its former colonies, used English as a tool to “civilize” their colonized subjects (Kachru, 1986), and their accents to justify their lower status. English as a lingua franca today provides power to those for whom English is a first language. People around the world are compelled to promote English language learning in education systems in order to avail themselves of the privilege it can provide in the globalized economy (Price, 2014). This spread of English language may be “reproducing older forms of imperial political, economic, and cultural dominance”, but it also exacerbates inequality along neoliberal political economic lines (Price, 2014). In short, the dominance of the English language around the world shifts power in ways that exacerbate inequality.
Further, English is and has historically been used as a nationalist tool in the United States to justify white conservative fears that immigrants pose an economic and political threat to them and has been
used to enforce the cultural assimilation of immigrants (Lippi-Green, 2012). We note that, even within the United States, “Standard American English” is a theoretical concept divorced from the reality of wide variations in spoken English across geographical areas, race and ethnicity, age, class, and gender (Lippi-Green, 2012). As stated in a resolution of the Conference on College Composition and Communication in 1972, “The claim that any one dialect is unacceptable amounts to an attempt of one social group to exert its dominance over another” (Lippi-Green, 2012). The social construct of language has real and significant consequences Nee et al. (2022), for example, allowing people with accents to be passed over for hiring in the United States, despite the Civil Rights Act prohibiting discrimination based on national origin (Matsuda, 1991). Accent-based discrimination can take many forms — people with accents deemed foreign are rated as less intelligent, loyal, and influential (Lawrence, 2021). Systems based on ASR automatically enforce the requirement that one code switch or assimilate in order to be understood, rejecting the “communicative burden” in which two people will “find a communicative middle ground and foster mutual intelligibility when they are motivated, socially and psychologically, to do so” (Lippi-Green, 2012). By design, then, ASR services operate like people who reject their communicative burden, which Lippi-Green reports is often due to their “negative social evaluation of the accent in question” (Lippi-Green, 2012). As Halcyon Lawrence reports from experience as a speaker of Caribbean English, “to create conditions where accent choice is not negotiable by the speaker is hostile; to impose an accent upon another is violent” (Lawrence, 2021).
Furthermore, we are concerned about discriminatory performance of ASR services because of its potential to create a class of people who are unable to use voice assistants, smart devices, and automatic transcription services. If technologists decide that the only user interface for a smart device will be via voice, a person who is unable to be accurately recognized will be unable to use the device at all. As such, ASR technologies have the potential to create a new disability, similar to how print technologies created the print disability “which unites disparate individuals who cannot read printed materials” (Whittaker et al., 2019). The biased performance of ASR, if combined with an assumption that ASR works for everyone, creates a dangerous situation in which those with particular English language accents may find themselves unable to obtain ASR service.
The consequences for someone lacking the ability to obtain reliable ASR may range from inconvenient to dangerous. Serious medical errors may result from incorrect transcription of physician’s notes Zhou et al. (2018), which are increasingly transcribed by ASR. There is, currently, an alarmingly high rate of transcription errors that could result in significant patient consequences, according to physicians who use ASR Goss et al. (2019). Other ASR users could potentially see increased danger: for example for smart wearables that users can use to call for help in an emergency Mrozek et al. (2021); or if one must repeat oneself multiple times when using a voice-controlled navigation system while driving a vehicle (and thus are distracted while driving); or if an ASR is one’s only means for controlling one’s robotic wheelchair Venkatesan et al. (2021).
Given that English language speakers have a multitude of dialects across the world, it is important to consider the ability of English language ASR services to accurately transcribe the speech of their global users. Given past research results (Tatman, 2017; Meyer et al., 2020) and the United States headquarters of Amazon, Google, and Microsoft, we hypothesize that ASR services will transcribe with less error for people who were born in the United States and whose first language is English. We hypothesize that performance of ASR systems is related to the age of onset, the age at which a person first started speaking English, which is known to be highly correlated with perceived accent (Flege et al., 1995; Moyer, 2007; Dollmann et al., 2019). But beyond this, based on the nationalist and neoliberal ways in which language is used to reinforce power, we hypothesize that ASR performance can be explained in part by the power relationship between the United States and speakers’ birth countries. That is, for the same age of onset of English and other related covariates among speakers not born in the United States, we expect that speakers born in countries which are political allies of the United States will have ASR performance that is significantly better than those born in nations which are not aligned politically with the United States. This paper tests and validates these hypotheses by utilizing a data set with significantly more speakers, across a large number of first languages and birth countries, than those which have previously been used for the evaluation of English ASR services.
2 RELATED WORK
2.1 AUDITS OF AUTOMATIC SPEECH RECOGNITION
Our work builds on a small but impactful body of literature investigating the disparities in performance of commercially available English ASR services.
Gender has been inconsistently associated with English ASR performance, significantly better for male speakers over female speakers (Tatman, 2017), female speakers over male speakers (Koenecke et al., 2020; Goldwater et al., 2008), or with no significant performance difference (Tatman & Kasten, 2017; Meyer et al., 2020).
There has been evidence that race and geographic background (especially as it relates to accent and dialect) has impact on ASR performance. Speakers from Scotland were found to have worse English ASR performance than speakers from New Zealand and the United States (Tatman, 2017), while speakers who self-identified as speaking with Indian English accents had transcriptions with higher error rates versus speakers who self-identified as speaking with US English accents (Meyer et al., 2020). Finally, ASR services consistently underperform for Black speakers in comparison to white speakers (Tatman & Kasten, 2017; Koenecke et al., 2020).
Significant research has addressed how to make ASR more robust to accent Liu et al. (2022), e.g., by training accent-based modifications to particular layers of a single model; mapping between the phones of two different accents; or using an adversarial network to separate accent-invariant and -variant features. Unlabelled clustering may be used to find accents that are under-represented; oversampling them can then improve performance Dheram et al. (2022). Our work is to audit rather than to repair ASR disparities.
Multiple aspects of spoken English are affected by the particular accent of the speaker, including both: a) how words are pronounced (Lippi-Green, 2012), and b) what words are used and how sentences are structured. By studying unstructured speech, researchers obtain a view of the discrimination experienced by speakers in transcription accuracy as a function of multiple aspects of accent (Koenecke et al., 2020). This paper presents results from a data set that controls for word choice and sentence structure (Weinberger, 2015) so that we can focus on the impact of word pronunciation on ASR performance.
2.2 HOW LANGUAGE IS USED TO CONTROL AND DIVIDE
Language, and in many cases specifically English, has a history of being “standardized” by those in power as a medium through which to exert influence Nee et al. (2022). Examples range from English being used to rank and hierarchize those deemed “other” in the United States (Lippi-Green, 2012) to the deliberate introduction of variations of English in India during British colonization in order to maintain societal hierarchies and divisions (Naregal, 2001). We argue the impact of ASR systems is towards more standardization of the English language, which is part of a history of how standardization of language has been a tool to maintain power.
2.3 HOW TECHNOLOGY AND AI SHIFT POWER
Auditing ASR services is just one step in building an understanding of how artificial intelligence (AI) has the potential to either consolidate or shift power in society, both on a local and global scale. Whether it be the feedback loops we observe in predictive policing (Ensign et al., 2018; Richardson et al., 2019), the involvement (and experimentation) of Cambridge Analytica in Kenyan elections (Nyabola, 2018), or the significant portion of Amazon Mechanical Turk crowdwork performed by workers in India (Ross et al., 2010; Difallah et al., 2018), all of these phenomena are a part of the coloniality of power:
[T]he coloniality of power can be observed in digital structures in the form of socio-cultural imaginations, knowledge systems and ways of developing and using technology which are based on systems, institutions, and values which persist from the past and remain unquestioned in the present (Mohamed et al., 2020).
Through our audit of English ASR services (with recordings from speakers born in many different countries) in combination with a historical analysis of the ways in which language and power have been closely intertwined, we both bring attention to and report evidence demonstrating the coloniality of ASR as it exists today.
3 MATERIALS AND METHODS
In this section, we describe our data and procedures for our quantitative study of ASR. To stay relevant with the published research on ASR bias, we select ASR services from the five evaluated in Koenecke et al. (2020). The top three performing ASR services in their extensive tests were Google, Amazon, and Microsoft. All three companies are notable not just as cloud service providers, but in the consumer product space in which their ASR services are implemented as part of their own devices. Recordings were transcribed using the three companies’ respective speech-to-text APIs in 2021.
3.1 WORD INFORMATION LOST (WIL)
To evaluate the correctness of the ASR service transcriptions against the elicitation paragraph that speakers read, we use a metric specifically designed for the assessment of ASR known as word information lost (WIL) (Morris, 2002; Morris et al., 2004). WIL is derived from an information theoretic measure of the mutual information between two sources. In short, for our case, it is a distance between the elicitation paragraph and the transcription for a speaker. The WIL is given by:
\mathrm{WIL} = 1 - \frac{H^{2}}{(H + S + D)(H + S + I)}, \qquad (1)
where H is the number of hits, D is the number of deletions, I is the number of insertions, and S is the number of substitutions between the elicitation paragraph and the transcription.
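As a minimal sketch, Equation 1 translates directly into a few lines of Python; the function below is illustrative only, with variable names mirroring the symbols above.

```python
# A minimal sketch of Eq. 1: WIL computed directly from word-alignment counts.
def word_information_lost(H: int, S: int, D: int, I: int) -> float:
    """WIL from hits (H), substitutions (S), deletions (D), insertions (I)."""
    return 1.0 - (H * H) / ((H + S + D) * (H + S + I))

# Toy example: 90 hits, 6 substitutions, 3 deletions, 2 insertions.
print(round(word_information_lost(90, 6, 3, 2), 3))  # -> 0.165
```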
Compared to another commonly used metric, word error rate (WER), WIL offers distinct advantages:
1. WIL is defined from 0 (all information preserved) to 1 (no information preserved), whereas WER is similarly lower bounded by 0 but has no upper bound.
2. WIL is symmetric between deletions and insertions, unlike WER, which, especially at high error rates, weights insertions more than deletions (Morris, 2002; Morris et al., 2004).
3. The inaccuracies of WER are more severe at higher vs. lower error rates (Morris, 2002), which can be problematic in linear regression studies.
Particularly in the context of our transcription task and the resulting analyses, the advantages of WIL make it the better metric by which to compare ASR performance.
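As a quick numerical illustration of advantage 1, the toy counts below (not data from our study) show WER exceeding 1 while WIL remains bounded.

```python
# WER = (S + D + I) / N can exceed 1, while WIL stays within [0, 1].
# Toy alignment: reference of 10 words with 2 hits, 8 substitutions,
# 0 deletions, and 5 insertions.
H, S, D, I = 2, 8, 0, 5
N = H + S + D                                   # reference length
wer = (S + D + I) / N                           # 1.3 -- above 1
wil = 1 - H**2 / ((H + S + D) * (H + S + I))    # ~0.973 -- bounded
print(wer, round(wil, 3))
```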
3.2 SPEECH ACCENT ARCHIVE
Our recordings come from The Speech Accent Archive, a collection of recordings of speakers born across the world and with different first languages all reading the same text (Weinberger, 2015). Full details on the methodology used in the recording collection and processing are available in Section B in the Appendix. After answering demographic questions, speakers were presented with the elicitation paragraph in Section 3.2.1 and allowed to ask questions about words they did not understand before reading the paragraph once for the recording.
3.2.1 ELICITATION PARAGRAPH
The elicitation paragraph below was crafted by linguists to include many of the sounds and most of the consonants, vowels, and clusters that are common to English (Weinberger, 2015).
Please call Stella. Ask her to bring these things with her from the store: Six spoons of fresh snow peas, five thick slabs of blue cheese, and maybe a snack for her brother Bob. We also need a small plastic snake and a big toy frog for the kids. She can scoop these things into three red bags, and we will go meet her Wednesday at the train station.
The methodology for recording, demographic information collected, and careful construction of the elicitation paragraph means that The Speech Accent Archive contains information particularly well-suited to analyzing how English ASR services perform across a global population.
Further, the use of a constant text allows us to produce results that control for particular aspects of accent. Since all speakers read the same paragraph, any disparity in ASR performance will not be a result of word choice, sentence structure, or length — heterogeneities in these may complicate ASR disparity analysis (Liu et al., 2022). We can therefore focus on ASR disparities that result from the manner of speaking the same words across different English language accents.
3.2.2 SPEAKER INFORMATION COLLECTED
The information on speakers collected at the time of recording includes their age, sex (recorded, unfortunately, as a single binary male/female variable), country of birth, first language, age of onset of English speaking, whether they had lived in an English-speaking country and, if so, for how long, and whether the speaker’s English learning environment was academic or naturalistic. Age of onset is particularly useful, as it has been shown to be correlated with perceived accent (Flege et al., 1995; Moyer, 2007; Dollmann et al., 2019). This speaker-level information is integrated into the regression performed in Section 4.2.
3.2.3 DATA DESCRIPTION
The data set includes 2,713 speakers with an average age of 32.6 years and an average age of onset of English speaking of 8.9 years. The speakers represent 212 first languages across 171 birth countries. Figure 2 lists the top ten first languages represented in our data set by the number of speakers.
We note that at the time of recording, 2,023 (74.6%) speakers were either current or previous residents of the United States. By default, most ASR services that would be used on and by these speakers while they are in the United States would likely be configured to use the United States English dialect for transcription. For some of our results, we also use this dialect as the default. In addition, in Section 4.3, we conduct analyses using the “best of” all transcription service dialect settings, and show that the results are primarily the same.
4 RESULTS
4.1 GROUP-LEVEL ANALYSIS
In Figure 1, we compare WIL across ASR services grouped by whether a speaker’s first language was English. While overall performance differs between services, with Microsoft performing best followed by Amazon and then Google, all services performed significantly better (P < 0.001) for speakers whose first language was English. On average across all services, WIL was 0.14 lower for first language English speakers. By service, the size of the disparity followed overall performance, with a difference of 0.17, 0.14, and 0.10 for Google, Amazon, and Microsoft, respectively.
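For readers who want to reproduce this kind of group comparison, the sketch below shows one plausible procedure; the exact test behind the reported P-values is not specified here, so a Welch's t-test is assumed, and the file and column names are hypothetical.

```python
# Hedged sketch: Welch's t-test on per-speaker WIL between first-language
# groups (assumed long-format CSV; file and column names are hypothetical).
import pandas as pd
from scipy import stats

df = pd.read_csv("wil_by_speaker.csv")
english = df.loc[df["first_language"] == "english", "wil"]
other = df.loc[df["first_language"] != "english", "wil"]
t, p = stats.ttest_ind(english, other, equal_var=False)
print(f"mean gap = {other.mean() - english.mean():.2f}, P = {p:.2g}")
```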
In Figure 2, we highlight mean ASR performance for the ten first languages for which we have the most data. The order of performance found in Figure 1 is maintained across services — across all ten first languages, Microsoft performs the best, followed by Amazon, and then Google. We find that all the services perform best for those whose first language is English, followed by Dutch and German. The worst performance is on speakers whose first languages are Mandarin and Spanish.
4.2 SPEAKER-LEVEL REGRESSION
Motivated by the results in Section 4.1, we construct a linear regression to understand what factors have a significant effect on the performance of ASR services. As discussed in Section 1, the way a person speaks English has been, and continues to be, a basis for discrimination by those in power, and so we include covariates to understand how this discrimination may transfer to ASR services. We want to know if ASR performance is correlated with how the speaker is perceived through the lens of United States global political power. As a broad single measure of this political power, we encode whether the speaker’s birth country is a part of the North Atlantic Treaty Organization (NATO) as of January 2022.
Specifically, we include the following covariates for each speaker: age; age of onset of English speaking; sex; English learning environment; if their first language is Germanic (as a measure of first language similarity to English, with the list of Germanic languages from Glottolog (Hammarström et al., 2021)); if their birth country is a part of NATO; if they have ever lived in an English-speaking country, and if so (as a nested variable), for how long. We also create nested covariates for English and the United States in the Germanic first language and birth country in NATO covariates respectively to separate the effects of English and the United States specifically.
In order to satisfy the assumptions for linear regression, in particular the normality of the residuals, we perform a square root transform on our response variable, WIL. The diagnostic plots for the regression assumptions can be found in Section C in the Appendix.
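A minimal sketch of this regression, assuming a per-speaker dataframe with hypothetical column names, is shown below; the nested covariates are approximated with interaction-style terms rather than the exact encoding used in our analysis.

```python
# Hedged sketch of the speaker-level OLS regression with a square-root
# transformed response (Section 4.2). Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("speakers.csv")  # hypothetical per-speaker file
df["sqrt_wil"] = np.sqrt(df["wil"])

formula = (
    "sqrt_wil ~ age + age_onset + C(sex) + C(learning_env)"
    " + germanic_l1 + germanic_l1:english_l1"         # English nested in Germanic
    " + nato_birth + nato_birth:usa_birth"            # USA nested in NATO
    " + lived_english + lived_english:years_english"  # years nested in lived
)
model = smf.ols(formula, data=df).fit()
print(model.summary())
```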
4.2.1 REGRESSION RESULTS
The results of the regression are shown in Table 1 in Section A of the Appendix under the headings Amazon, Google, and Microsoft. We find multiple covariates that have a significant effect across all three services (P < 0.05). Highlights of these findings include:
• WIL increases with a later age of onset of English speaking. As described in Section 3.2, age of onset is correlated with perceived accent.
• WIL decreases with speaking a Germanic first language, having controlled for the effect on WIL of English as a first language, which is also significant.
• Having lived in an English-speaking country has a negative effect on WIL, as does the number of years spent living in an English-speaking country.
• Finally, being born in a country that is a part of NATO but is not the United States is associated with a lower WIL.
The final result suggests that a person’s birth in a country proximate to the United States’ geopolitical power is related to how ASR services perform on their speech.
Some covariates are only significant for certain services: ASR services from Amazon and Microsoft perform significantly worse on males than females. Google and Microsoft perform significantly better for those born in the United States, while Amazon and Google perform significantly better for those who learned English in a naturalistic environment rather than an academic one.
4.3 TRANSCRIPTIONS USING OTHER ENGLISH SETTINGS
As explained in Section 3.2.3, a majority (74.6%) of the speakers in the data set were or had been residents of the United States at the time of recording. Thus, we used the United States English setting of all of the ASR services, as this was likely the setting that would be used on or by them.
However, ASR services do offer more settings for English. It is reasonable to ask how much using all of the English language settings available for a transcription service could improve WIL. We decided to understand this question for the service with the worst overall performance and largest disparity in performance, Google, as shown in Figure 1.
We transcribed the recordings using all available English settings that Google supported. Specifically, we try these English settings on Google’s ASR service: Australia, Canada, Ghana, Hong Kong, India, Ireland, Kenya, New Zealand, Nigeria, Pakistan, Philippines, Singapore, South Africa, Tanzania, United Kingdom and the United States. To give Google the best opportunity for improvement, for each speaker we took the lowest WIL across all settings’ transcriptions. Note that while this is guaranteed to offer the largest improvement, it is unrealistic to do in practice, since it requires knowledge of the ground-truth transcript. We refer to this as Google All Settings.
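The oracle selection itself is simple to express; the sketch below assumes a long-format table of per-speaker, per-locale WIL values with hypothetical column names.

```python
# Hedged sketch of the "Google All Settings" oracle: for every speaker, keep
# the lowest WIL across all English locale transcriptions. This is an oracle
# bound -- choosing the best setting requires the ground-truth transcript.
import pandas as pd

wil_long = pd.read_csv("google_wil_by_setting.csv")  # hypothetical file
best = wil_long.groupby("speaker_id")["wil"].min().rename("wil_all_settings")
print(best.describe())
```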
A comparison to Google’s original performance is offered in Figure 1, where Google refers to transcriptions generated only using the United States setting, and Google All Settings refers to WIL generated using the method above. Originally, we saw that for Google, first language English speakers had a WIL that was on average 0.17 lower than first language non-English speakers. When using the technique for Google All Settings on both first language English and non-English speakers, we notice that a disparity of a similar size (0.14, P < 0.001) still exists. In fact, even when we only use the United States English setting for speakers with a first language of English and allow first language non-English speakers to take the lowest WIL from all settings, the disparity is still a considerable 0.10 (P < 0.001).
We also note that the significance of the factors in the linear regression did not change when compared to Google’s original transcription performance. This result is displayed in Table 1 in Section A of the Appendix under the column Google All Settings. This suggests that even Google’s attempts to adapt their technology to different global settings are subject to the same biases we highlighted originally.
5 DISCUSSION
Across all three ASR services tested (Amazon, Google, and Microsoft), we find significant disparities in performance between those whose first language is English and those whose first language is not English. Moreover, we find that these disparities are connected not only to the age at which an individual began speaking English, the environment they learned it in, and whether or not their first language was Germanic, but also whether or not their birth country is a part of NATO, a representation of political alignment with the United States. When, with one of the services tested (Google), we again transcribed recordings using all of the available English language locality settings, we saw all of our significant results remain the same, implying that the current set of international English language models offered does not solve the inherent problems of bias we observe in ASR.
5.1 HISTORICAL CONTEXT
In many ways, we are not surprised by the ways in which a neoliberal capitalist power structure provides better services to a select group of English language speakers. It parallels historical colonial use of language, first to standardize language for the benefit of those in power, and second to benefit from hierarchies in language dialects.
Many of today’s globally dominant languages were imposed violently, first within their nations of origin and then through the colonial encounter across the world. For instance, the French language was used to turn “peasants into Frenchmen” (Weber, 1976). Within France, French standardization served the dominant class, suppressed the culture of many of its own people, and worked to discipline labor and extract greater profit. ASR systems similarly serve to standardize language — any accent deemed nonstandard is not understood. Speakers are compelled to mimic the dominant accent, which is felt as coerced or even violent (Lawrence, 2021). Providers do not wait to ensure that their ASR product works well across dialects before deploying it; that would break the “move fast and break things” rule and allow competitors to establish market dominance (Hicks, 2021).
Second, the notion of a “standard” language dialect enables social disparity. Education policy during the US occupation of the Philippines enforced racialization; English language lessons were used to emphasize the inferior accent, hygiene and cleanliness of the lower status “Oriental” students (McElhinny & Heller, 2020). In India, British education policies deliberately taught different English dialects to different castes to grant cultural and social authority to a class of elites that aided British governance. Dominant castes entered English medium schools and were tutored by people from England (Gauba, 1974), and were expected to “refine” the “vernacular” languages and teach those to other Indians. They did so while securing their own privileged access to colonial English (Chandra, 2012) and the state power that came with it as the favoured language of governance. The elite monopolized the “correct” language by denying it to other Indians, and thereby made the language selective and exclusive.
ASR services are similarly exclusive; literally providing less control of systems to those with disfavoured accents, and less capability for those in professions which use ASR transcription, e.g., medical professionals. Exclusivity serves an important economic purpose by allowing providers to market beyond the commodity that ASR really is. In this case, it enables marketing of the identity people can have by using it, including the identity of speaking “correct” (white U.S.) English.
5.2 FORESIGHT FOR FUTURE ASR DEVELOPMENT
We provide one example of how the lessons from the historical use of language for dominance may apply to discussions of how ASR will evolve in the future. Academic researchers and industry service providers have made claims that the problems are temporary. For example, Amazon claims “As more people speak to Alexa, and with various accents, Alexa’s understanding will improve” (Harwell, 2018). As another example, researchers state about the ASR racial performance gap: “The likely cause of this shortcoming is insufficient audio data from black speakers when training the models” (Koenecke et al., 2020).
However, historical hindsight has not indicated that ASR services will improve over time without deeper structural changes. First, by selling products that work for one accent above others, technology companies make speakers of other accents less likely to use their products. Members of groups
historically subject to disproportionate state surveillance may be more hesitant to consent to contribute data towards AI technologies (Jo & Gebru, 2020). Both problems operate as a feedback loop to keep disfavored speakers out of future ASR training data. Further, ML algorithms may naturally tend to discriminate against a smaller group in the population because sacrificing performance on that group may allow reducing average “cost” on the population as a whole, even if the training data represents them in proportion to their population (Zou & Schiebinger, 2018). Finally, ground truth labelling is likely to be less accurate for members of a disfavored group, and incorrect labels will be fed back into the training of future systems (Denton, 2019). Whittaker et al. describe an example in computer vision for autonomous vehicles — the more video of people in wheelchairs used in training, the less likely it was to label a person backing their wheelchair across the street at a crosswalk as a person (Whittaker et al., 2019). Further, in speaker verification, Hutiri and Ding lay out how multiple layers of bias contribute to performance disparities (Hutiri & Ding, 2022). Instead, historical hindsight indicates that the problem in ASR is more systematic, related to the more fundamental nature of the use of standardized language to divide and provide the benefits of control.
In short, the techno-optimist idea that ASR accent bias will resolve itself in time is unconvincing. Similar to how equity and justice should be centered in each layer of linguistic structure for equitable NLP design (Nee et al., 2022), active work will be required to design ASR services that repair damage caused by colonial and post-colonial uses of language and accent to discriminate.
5.3 CONCLUSION
This paper extends the results reported in prior English language ASR performance audits. In part, we provide an audit of ASR using a much larger data set containing speech from a large number of countries of birth as well as a large number of first languages. The quantitative results show how ASR services perform on speakers whose first language is English vs. those for whom it is not, and how ASR services perform compared to each other. More critically, we find that, controlling for several related covariates about first language, all ASR services perform significantly worse if the speaker was born outside of a NATO country — in effect, in a country further from the United States’ geopolitical power. We argue that this has historical parallels in the ways in which language has been used to maintain global power. By explaining these parallels, and by providing quantitative evidence for the effect, we hope that researchers and developers hoping to reduce disparities in ASR services will be better able to identify the systematic nature of the problems.
6 ETHICS STATEMENT
While the creation and continued upkeep of The Speech Accent Archive does not fall within the scope of this work, we note that all subjects did sign an informed consent form before being recorded, available at https://accent.gmu.edu/pdfs/consent.pdf, and that all data used in this work was anonymized.
It is important to recognize that the data set used in this study overrepresents some groups/backgrounds and underrepresents others, while also not including information on other potential influencing factors such as socioeconomic status. One area of focus for future data collection could be speakers who do not currently reside in the United States.
7 REPRODUCIBILITY STATEMENT
The recordings and associated demographic data used in this experiment are available upon request from the maintainers of The Speech Accent Archive (Weinberger, 2015), or at https://accent.gmu.edu/. Section 3 describes the data set and error metric used in our analysis, while Section B in the Appendix describes the data pipeline, including the steps of recording collection, submission to ASR services, transcript processing, and computation of the error rate. The scripts used in the cleaning and analysis of the data will be hosted on GitHub upon publication of the paper.
A REGRESSION TABLE
Table 1: Speaker-level regression
Dependent Variable: Square Root of Word Information Lost (WIL)
Standard errors in parentheses; nested covariates are indicated in brackets.

Covariate                                                               Amazon             Google             Microsoft          Google All Settings
Age at Time of Recording                                                0.001∗ (0.0003)    0.002∗ (0.0004)    0.001∗ (0.0004)    0.001∗ (0.0004)
Age of Onset of English Speaking                                        0.006∗ (0.0005)    0.004∗ (0.001)     0.006∗ (0.001)     0.005∗ (0.001)
Male                                                                    0.015∗ (0.005)     0.001 (0.007)      0.022∗ (0.006)     0.005 (0.006)
Naturalistic Learning Environment                                       −0.028∗ (0.009)    −0.042∗ (0.012)    −0.018 (0.010)     −0.029∗ (0.010)
Unknown Learning Environment                                            −0.135 (0.081)     −0.007 (0.110)     −0.061 (0.095)     −0.008 (0.093)
Germanic First Language                                                 −0.074∗ (0.012)    −0.082∗ (0.017)    −0.074∗ (0.014)    −0.071∗ (0.014)
First Language English [Germanic First Language]                        0.046∗ (0.017)     0.124∗ (0.024)     0.058∗ (0.020)     0.084∗ (0.020)
Birth Country in NATO                                                   −0.060∗ (0.007)    −0.041∗ (0.010)    −0.063∗ (0.009)    −0.040∗ (0.008)
Birth Country USA [Birth Country in NATO]                               −0.003 (0.012)     −0.135∗ (0.016)    −0.027∗ (0.014)    −0.076∗ (0.013)
Lived in English-Speaking Country                                       −0.035∗ (0.009)    −0.053∗ (0.012)    −0.044∗ (0.010)    −0.057∗ (0.010)
Years in English-Speaking Country [Lived in English-Speaking Country]   −0.002∗ (0.0003)   −0.002∗ (0.0005)   −0.001∗ (0.0004)   −0.002∗ (0.0004)
Intercept                                                               0.400∗ (0.012)     0.496∗ (0.016)     0.283∗ (0.014)     0.454∗ (0.013)

Observations                                                            2,713              2,713              2,713              2,713
R2                                                                      0.332              0.258              0.266              0.274
Adjusted R2                                                             0.329              0.255              0.263              0.271
Residual Std. Error (df = 2,701)                                        0.139              0.190              0.164              0.160
F Statistic (df = 11; 2,701)                                            121.930∗           85.259∗            88.773∗            92.863∗
Reference Classes: Female & Academic Learning Environment ∗P < 0.05
B SPEECH ACCENT ARCHIVE RECORDING AND PROCESSING
B.1 DATA COLLECTION
The following information about the data collection process comes from The Speech Accent Archive (Weinberger, 2015).
Subjects were seated 8-10 inches from the microphone and recorded individually in a quiet room. They were each asked the following questions:
• Where were you born?
• What is your native language?
• What other languages besides English and your native language do you know?
• How old are you?
• How old were you when you first began to study English?
• How did you learn English (academically or naturalistically)?
• How long have you lived in an English-speaking country? Which country?
Subjects were asked to look over the elicitation paragraph and ask questions about any unfamiliar words. Finally, they read the passage once into a high-quality recording device.
B.2 DATA PROCESSING
All recordings were initially converted into the mp3 file format and then subsequently converted into the formats necessary for transcription by each of the respective services. This was done to help control any effects which might arise from files being originally recorded in lossy instead of lossless formats.
Audio files were submitted to the respective APIs for all three service providers and the returned transcripts were then concatenated into a single string for each speaker. Across all services, only once did a service fail to return a transcription, and this occurred only for a specific triplet of service, speaker, and transcription dialect. Transcripts were then cleaned using the following process:
1. Semicolons were converted to spaces.
2. Characters were converted to lowercase.
3. Hyphens and forward and back slashes were replaced with spaces.
4. All currency symbols, ampersands, equals signs, octothorpes, and percent signs were separated by spaces on both sides.
5. The string was split on spaces to create words.
6. Punctuation at the beginning and end of words was replaced with spaces.
7. Leading and trailing spaces were stripped.
8. Words that were only spaces were deleted.
9. Words exactly equal to the characters “3”, “5”, and “6” were converted to “three”, “five”, and “six”, since these exact numbers appear in the elicitation paragraph as written in Section 3.2.1 and would be correct transcriptions.
10. Spaces were added back in between words and recombined into one string.
After putting all transcripts and the elicitation paragraph through this process, WIL was calculated using the jiwer Python package (Morris et al., 2004).
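A condensed sketch of this cleaning-plus-scoring pipeline follows; it compresses the ten steps above into a single helper, uses an illustrative subset of symbols in step 4, and assumes a jiwer version that exposes a wil measure.

```python
# Hedged sketch of the transcript cleaning steps followed by WIL scoring.
import string
import jiwer  # assumes a version exposing jiwer.wil

NUM_MAP = {"3": "three", "5": "five", "6": "six"}  # step 9

def clean(text: str) -> str:
    text = text.replace(";", " ").lower()          # steps 1-2
    for ch in "-/\\":
        text = text.replace(ch, " ")               # step 3
    for ch in "$&=#%":                             # step 4 (illustrative symbols)
        text = text.replace(ch, f" {ch} ")
    words = [w.strip(string.punctuation) for w in text.split(" ")]  # steps 5-7
    words = [NUM_MAP.get(w, w) for w in words if w.strip()]         # steps 8-9
    return " ".join(words)                         # step 10

reference = clean("Please call Stella. Ask her to bring these things.")
hypothesis = clean("please call stella ask her to bring these things")
print(jiwer.wil(reference, hypothesis))            # 0.0 for this toy pair
```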
C CHECKING REGRESSION ASSUMPTIONS
Before looking at the results of our regression, we evaluate the regression assumptions via diagnostic plots in Figures 3, 4, 5, and 6. Due to the square root transform which we performed on WIL (our response variable) in Section 4.2, the diagnostic plots show that our regression assumptions are satisfied, although there are some outliers to investigate. We analyzed each labelled outlier from the plots by hand, first by checking the speaker data to make sure there are no anomalies, and then by listening to the recording to ensure there are no audio issues. Having done this, we proceed to interpret the results of the regression as described in Table 1. | 1. What is the focus of the paper regarding biases in speech recognition systems?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental design and analysis?
3. Do you have any concerns about the paper's findings and their implications?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any potential sources of disparities in ASR performance that the authors could explore further? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present a paper on biases in mainstream commercial speech recognition services from Amazon, Google and Microsoft that:
shows statistically significant differences in the word information lost (WIL) metric depending on whether the transcribed speech comes from a native or non-native English speaker, and on the speaker's age of onset of English speaking.
speakers who learn English in a naturalistic environment have lower WIL than those who learn it in an academic one
WIL is lower for speakers whose first language is a Germanic one
there is a statistically significant correlation between WIL and the speaker's birth country's political alignment with the United States.
The findings that speech recognition systems do worse for non-native speakers, for academic (vs. naturalistic) learning environments, and for non-Germanic vs. Germanic first languages are all not particularly surprising, and I do not believe they are novel. Reporting WIL to demonstrate this is likely novel. The main contribution of this paper is showing a political aspect to the disparity.
Strengths And Weaknesses
Strengths
Performs a controlled study of how various commercial recognizers perform with a fixed text that is read by speakers who have provided demographic information.
Weaknesses:
Without samples of ASR results from the collection, one cannot tell whether there were systematic experimental design biases in the results due to poor text/scoring normalization choices, e.g., are numbers normalized? would a "6" vs. "six" be penalized?
WER doesn't necessarily correlate with overall performance; errors on filler or hesitation words are inconsequential, while errors on names are very problematic. WIL likely has the same problems. It would have been useful to publish WER alongside WIL for those less familiar with WIL.
Does WIL actually reflect user satisfaction? for example, how do aspects of L2 English acquisition compare on completing Alexa requests or transcribing videos?
The lack of differences between Google and Google All Settings seems strange: it would be interesting to see a table matrix showing how each of the sub-accent systems compares to EnUs for matched cases, i.e., EnIn tested on EnUs vs. EnIn on EnIn, EnUk vs. EnUs, etc. Why would Google offer multiple accent versions if there is no substantial difference?
The work uses read speech to analyze performance; however, most systems expect spontaneous speech. https://www.researchgate.net/publication/221999276_Differences_between_acoustic_characteristics_of_spontaneous_and_read_speech_and_their_effects_on_speech_recognition_performance
the sum of these experimental issues may invalidate the findings, magnify intrinsic biases in the ASR systems, or be inconsequential
the work fails to explore other possible sources of the disparity / it is too quick to assign political power:
is it simply a data issue? do worse-performing populations simply have less data available to train on? the authors could try to train systems with different proportions of accented data
is it technical? is modeling of accents more difficult as they deviate further linguistically from English? does the level of linguistic difference matter, e.g., grammatical verb-order vs. pronunciation differences?
is it historical? do the least well performing demographics come from populations
it would be useful to explore and eliminate other possible sources before solely describing the power relationship between the birth country and the United States as the reason for disparities
is this something that holds for other power relationships? would native Russian speakers of Chinese do better on Baidu's ASR systems? how about on American Chinese ASR systems?
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. WIL and its correlations with various aspects of L2 English acquisition (e.g., age, Germanic roots) are not particularly novel or interesting.
The hypothesis that the disparity of various commercial recognizers is related to political power isn't well researched, i.e., there are other possible explanations that are neither suggested nor explored, as noted above. There are also many statements that aren't fully backed and tend to be of an opinionated nature, e.g.
"assumption that ASR works for everyone", citation?
"dangerous situation" "particular English language accents ... unable to obtain basic services." - this seems rather exaggerated. in what situations would this disparity result in lack of basic service or a dangerous situation?
"attempted standardization of the English language via the increasing ubiquity of ASR systems is another chapter in a long story of how movements create a standard language... have been tools to maintain power." - a clear link between a disparity in accented vs native ASR performance as a tool to maintain power hasn't been established, how can it be described in this manner?
The work is likely reproducible. The source data from Speech Accent Archive is available on request, but at the moment none of the resulting ASR transcripts or processing scripts are available. Having an anonymized github source would be useful in reviewing the results especially given the conclusions being drawn. |
ICLR | Title
Performance Disparities Between Accents in Automatic Speech Recognition
Abstract
Automatic speech recognition (ASR) services are ubiquitous. Past research has identified discriminatory ASR performance as a function of racial group and nationality. In this paper, we expand the discussion by performing an audit of some of the most popular English language ASR services using a large and global data set of speech from The Speech Accent Archive. We show that, even when controlling for multiple linguistic covariates, ASR service performance has a statistically significant relationship to the political alignment of the speaker’s birth country with respect to the United States’ geopolitical power. We discuss this bias in the context of the historical use of language to maintain global and political power.
1 INTRODUCTION
Automatic speech recognition (ASR) services are a key component of the vision for the future of human-computer interaction. However, many users are familiar with the frustrating experience of repeatedly not being understood by their voice assistant (Harwell, 2018), so much so that frustration with ASR has become a culturally-shared source of comedy (Connell & Florence, 2015; Mitchell, 2018).
Bias auditing of ASR services has quantified these experiences. English language ASR has higher error rates: for Black Americans compared to white Americans (Koenecke et al., 2020; Tatman & Kasten, 2017); for Scottish speakers compared to speakers from California and New Zealand (Tatman, 2017); and for speakers who self-identify as having Indian accents compared to speakers who self-identify as having American accents (Meyer et al., 2020). It should go without saying, but everyone has an accent — there is no “unaccented” version of English (Lippi-Green, 2012). Due to colonization and globalization, different Englishes are spoken around the world. While some English accents may be favored by those with class, race, and national origin privilege, there is no technical barrier to building an ASR system which works well on any particular accent. So we are left with the question: why does ASR performance vary as it does as a function of the global English accent spoken? This paper attempts to address this question quantitatively using a large public data set, The Speech Accent Archive (Weinberger, 2015), which is larger in number of speakers (2,713), number of first languages (212), and number of birth countries (171) than other data sets previously used to audit ASR services, and thus allows us to answer richer questions about ASR biases. Further, by observing historical patterns in how language has shifted power, our paper provides a means for readers to understand how ASR may be operating today.
Historically, accent and language have been used as a tool of colonialism and a justification of oppression. Colonial power, originally British and then of its former colonies, used English as a tool to “civilize” their colonized subjects (Kachru, 1986), and their accents to justify their lower status. English as a lingua franca today provides power to those for whom English is a first language. People around the world are compelled to promote English language learning in education systems in order to avail themselves of the privilege it can provide in the globalized economy (Price, 2014). This spread of English language may be “reproducing older forms of imperial political, economic, and cultural dominance”, but it also exacerbates inequality along neoliberal political economic lines (Price, 2014). In short, the dominance of the English language around the world shifts power in ways that exacerbate inequality.
Further, English is and has historically been used as a nationalist tool in the United States to justify white conservative fears that immigrants pose an economic and political threat to them and has been
used to enforce the cultural assimilation of immigrants (Lippi-Green, 2012). We note that, even within the United States, “Standard American English” is a theoretical concept divorced from the reality of wide variations in spoken English across geographical areas, race and ethnicity, age, class, and gender (Lippi-Green, 2012). As stated in a resolution of the Conference on College Composition and Communication in 1972, “The claim that any one dialect is unacceptable amounts to an attempt of one social group to exert its dominance over another” (Lippi-Green, 2012). The social construct of language has real and significant consequences Nee et al. (2022), for example, allowing people with accents to be passed over for hiring in the United States, despite the Civil Rights Act prohibiting discrimination based on national origin (Matsuda, 1991). Accent-based discrimination can take many forms — people with accents deemed foreign are rated as less intelligent, loyal, and influential (Lawrence, 2021). Systems based on ASR automatically enforce the requirement that one code switch or assimilate in order to be understood, rejecting the “communicative burden” in which two people will “find a communicative middle ground and foster mutual intelligibility when they are motivated, socially and psychologically, to do so” (Lippi-Green, 2012). By design, then, ASR services operate like people who reject their communicative burden, which Lippi-Green reports is often due to their “negative social evaluation of the accent in question” (Lippi-Green, 2012). As Halcyon Lawrence reports from experience as a speaker of Caribbean English, “to create conditions where accent choice is not negotiable by the speaker is hostile; to impose an accent upon another is violent” (Lawrence, 2021).
Furthermore, we are concerned about discriminatory performance of ASR services because of its potential to create a class of people who are unable to use voice assistants, smart devices, and automatic transcription services. If technologists decide that the only user interface for a smart device will be via voice, a person who is unable to be accurately recognized will be unable to use the device at all. As such, ASR technologies have the potential to create a new disability, similar to how print technologies created the print disability “which unites disparate individuals who cannot read printed materials” (Whittaker et al., 2019). The biased performance of ASR, if combined with an assumption that ASR works for everyone, creates a dangerous situation in which those with particular English language accents may find themselves unable to obtain ASR service.
The consequences for someone lacking the ability to obtain reliable ASR may range from inconvenient to dangerous. Serious medical errors may result from incorrect transcription of physician’s notes Zhou et al. (2018), which are increasingly transcribed by ASR. There is, currently, an alarmingly high rate of transcription errors that could result in significant patient consequences, according to physicians who use ASR Goss et al. (2019). Other ASR users could potentially see increased danger: for example for smart wearables that users can use to call for help in an emergency Mrozek et al. (2021); or if one must repeat oneself multiple times when using a voice-controlled navigation system while driving a vehicle (and thus are distracted while driving); or if an ASR is one’s only means for controlling one’s robotic wheelchair Venkatesan et al. (2021).
Given that English language speakers have a multitude of dialects across the world, it is important to consider the ability of English language ASR services to accurately transcribe the speech of their global users. Given past research results (Tatman, 2017; Meyer et al., 2020) and the United States headquarters of Amazon, Google, and Microsoft, we hypothesize that ASR services will transcribe with less error for people who were born in the United States and whose first language is English. We hypothesize that performance of ASR systems is related to the age of onset, the age at which a person first started speaking English, which is known to be highly correlated with perceived accent (Flege et al., 1995; Moyer, 2007; Dollmann et al., 2019). But beyond this, based on the nationalist and neoliberal ways in which language is used to reinforce power, we hypothesize that ASR performance can be explained in part by the power relationship between the United States and speakers’ birth countries. That is, for the same age of onset of English and other related covariates among speakers not born in the United States, we expect that speakers born in countries which are political allies of the United States will have ASR performance that is significantly better than those born in nations which are not aligned politically with the United States. This paper tests and validates these hypotheses by utilizing a data set with significantly more speakers, across a large number of first languages and birth countries, than those which have previously been used for the evaluation of English ASR services.
2 RELATED WORK
2.1 AUDITS OF AUTOMATIC SPEECH RECOGNITION
Our work builds on a small but impactful body of literature investigating the disparities in performance of commercially available English ASR services.
Gender has been inconsistently associated with English ASR performance, significantly better for male speakers over female speakers (Tatman, 2017), female speakers over male speakers (Koenecke et al., 2020; Goldwater et al., 2008), or with no significant performance difference (Tatman & Kasten, 2017; Meyer et al., 2020).
There has been evidence that race and geographic background (especially as it relates to accent and dialect) has impact on ASR performance. Speakers from Scotland were found to have worse English ASR performance than speakers from New Zealand and the United States (Tatman, 2017), while speakers who self-identified as speaking with Indian English accents had transcriptions with higher error rates versus speakers who self-identified as speaking with US English accents (Meyer et al., 2020). Finally, ASR services consistently underperform for Black speakers in comparison to white speakers (Tatman & Kasten, 2017; Koenecke et al., 2020).
Significant research has addressed how to make ASR more robust to accent (Liu et al., 2022), e.g., by training accent-based modifications to particular layers of a single model; mapping between the phones of two different accents; or using an adversarial network to separate accent-invariant and -variant features. Unlabelled clustering may be used to find accents that are under-represented; oversampling them can then improve performance (Dheram et al., 2022). Our work is to audit, rather than to repair, ASR disparities.
Multiple aspects of spoken English are affected by the particular accent of the speaker, including both: a) how words are pronounced (Lippi-Green, 2012), and b) what words are used and how sentences are structured. By studying unstructured speech, researchers obtain a view of the discrimination experienced by speakers in transcription accuracy as a function of multiple aspects of accent (Koenecke et al., 2020). This paper presents results from a data set that controls for word choice and sentence structure (Weinberger, 2015) so that we can focus on the impact of word pronunciation on ASR performance.
2.2 HOW LANGUAGE IS USED TO CONTROL AND DIVIDE
Language, and in many cases specifically English, has a history of being “standardized” by those in power as a medium through which to exert influence (Nee et al., 2022). Examples range from English being used to rank and hierarchize those deemed “other” in the United States (Lippi-Green, 2012) to the deliberate introduction of variations of English in India during British colonization in order to maintain societal hierarchies and divisions (Naregal, 2001). We argue that the impact of ASR systems is towards more standardization of the English language, which is part of a long history of the standardization of language being used as a tool to maintain power.
2.3 HOW TECHNOLOGY AND AI SHIFT POWER
Auditing ASR services is just one step in building an understanding of how artificial intelligence (AI) has the potential to either consolidate or shift power in society, both on a local and global scale. Whether it be the feedback loops we observe in predictive policing (Ensign et al., 2018; Richardson et al., 2019), the involvement (and experimentation) of Cambridge Analytica in Kenyan elections (Nyabola, 2018), or the significant portion of Amazon Mechanical Turk crowdwork performed by workers in India (Ross et al., 2010; Difallah et al., 2018), all of these phenomena are a part of the coloniality of power:
[T]he coloniality of power can be observed in digital structures in the form of socio-cultural imaginations, knowledge systems and ways of developing and using technology which are based on systems, institutions, and values which persist from the past and remain unquestioned in the present (Mohamed et al., 2020).
Through our audit of English ASR services (with recordings from speakers born in many different countries) in combination with a historical analysis of the ways in which language and power have been closely intertwined, we both bring attention to and report evidence demonstrating the coloniality of ASR as it exists today.
3 MATERIALS AND METHODS
In this section, we describe our data and procedures for our quantitative study of ASR. To remain comparable with published research on ASR bias, we select ASR services from among the five evaluated in Koenecke et al. (2020). The top three performing ASR services in their extensive tests were Google, Amazon, and Microsoft. All three companies are notable not just as cloud service providers, but also in the consumer product space, in which their ASR services are implemented as part of their own devices. Recordings were transcribed using the three companies’ respective speech-to-text APIs in 2021.
3.1 WORD INFORMATION LOST (WIL)
To evaluate the correctness of the ASR service transcriptions against the elicitation paragraph that speakers read, we use a metric specifically designed for the assessment of ASR known as word information lost (WIL) (Morris, 2002; Morris et al., 2004). WIL is derived from an information theoretic measure of the mutual information between two sources. In short, for our case, it is a distance between the elicitation paragraph and the transcription for a speaker. The WIL is given by:
\mathrm{WIL} = 1 - \frac{H^{2}}{(H + S + D)(H + S + I)}, \qquad (1)
where H is the number of hits, D is the number of deletions, I is the number of insertions, and S is the number of substitutions between the elicitation paragraph and the transcription.
Compared to another commonly used metric, word error rate (WER), WIL offers distinct advantages:
1. WIL is defined from 0 (all information preserved) to 1 (no information preserved), whereas WER is similarly lower bounded by 0 but has no upper bound.
2. WIL is symmetric between deletions and insertions, unlike WER, which, especially at high error rates, weights insertions more than deletions (Morris, 2002; Morris et al., 2004).
3. The inaccuracies of WER are more severe at higher vs. lower error rates (Morris, 2002), which can be problematic in linear regression studies.
Particularly in the context of our transcription task and the resulting analyses, the advantages of WIL make it the better metric by which to compare ASR performance.
3.2 SPEECH ACCENT ARCHIVE
Our recordings come from The Speech Accent Archive, a collection of recordings of speakers born across the world and with different first languages all reading the same text (Weinberger, 2015). Full details on the methodology used in the recording collection and processing are available in Section B in the Appendix. After answering demographic questions, speakers were presented with the elicitation paragraph in Section 3.2.1 and allowed to ask questions about words they did not understand before reading the paragraph once for the recording.
3.2.1 ELICITATION PARAGRAPH
The elicitation paragraph below was crafted by linguists to include many of the sounds and most of the consonants, vowels, and clusters that are common to English (Weinberger, 2015).
Please call Stella. Ask her to bring these things with her from the store: Six spoons of fresh snow peas, five thick slabs of blue cheese, and maybe a snack for her brother Bob. We also need a small plastic snake and a big toy frog for the kids. She can scoop these things into three red bags, and we will go meet her Wednesday at the train station.
The methodology for recording, demographic information collected, and careful construction of the elicitation paragraph means that The Speech Accent Archive contains information particularly well-suited to analyzing how English ASR services perform across a global population.
Further, the use of a constant text allows us to produce results that control for particular aspects of accent. Since all speakers read the same paragraph, any disparity in ASR performance will not be a result of word choice, sentence structure, or length — heterogeneities in these may complicate ASR disparity analysis (Liu et al., 2022). We can therefore focus on ASR disparities that result from the manner of speaking the same words across different English language accents.
3.2.2 SPEAKER INFORMATION COLLECTED
The information on speakers collected at the time of recording includes their age, sex (recorded, unfortunately, as a single binary male/female variable), country of birth, first language, age of onset of English speaking, whether they had lived in an English-speaking country and, if so, for how long, and whether the speaker’s English learning environment was academic or naturalistic. Age of onset is particularly useful, as it has been shown to be correlated with perceived accent (Flege et al., 1995; Moyer, 2007; Dollmann et al., 2019). This speaker-level information is integrated into the regression performed in Section 4.2.
3.2.3 DATA DESCRIPTION
The data set includes 2,713 speakers with an average age of 32.6 years and an average age of onset of English speaking of 8.9 years. The speakers represent 212 first languages across 171 birth countries. Figure 2 lists the top ten first languages represented in our data set by the number of speakers.
We note that at the time of recording, 2,023 (74.6%) speakers were either current or previous residents of the United States. By default, most ASR services that would be used on and by these speakers while they are in the United States would likely be configured to use the United States English dialect for transcription. For some of our results, we also use this dialect as the default. In addition, in Section 4.3, we conduct analyses using the “best of” all transcription service dialect settings, and show that the results are primarily the same.
4 RESULTS
4.1 GROUP-LEVEL ANALYSIS
In Figure 1, we compare WIL across ASR services grouped by whether a speaker’s first language was English. While overall performance differs between services, with Microsoft performing best followed by Amazon and then Google, all services performed significantly better (P < 0.001) for speakers whose first language was English. On average across all services, WIL was 0.14 lower for first language English speakers. By service, the size of the disparity followed overall performance, with a difference of 0.17, 0.14, and 0.10 for Google, Amazon, and Microsoft, respectively.
In Figure 2, we highlight mean ASR performance for the ten first languages for which we have the most data. The order of performance found in Figure 1 is maintained across services — across all ten first languages, Microsoft performs the best, followed by Amazon, and then Google. We find that all the services perform best for those whose first language is English, followed by Dutch and German. The worst performance is on speakers whose first languages are Mandarin and Spanish.
4.2 SPEAKER-LEVEL REGRESSION
Motivated by the results in Section 4.1, we construct a linear regression to understand what factors have a significant effect on the performance of ASR services. As discussed in Section 1, the way a person speaks English has been, and continues to be, a basis for discrimination by those in power, and so we include covariates to understand how this discrimination may transfer to ASR services. We want to know if ASR performance is correlated with how the speaker is perceived through the lens of United States global political power. As a broad single measure of this political power, we encode whether the speaker’s birth country is a part of the North Atlantic Treaty Organization (NATO) as of January 2022.
Specifically, we include the following covariates for each speaker: age; age of onset of English speaking; sex; English learning environment; if their first language is Germanic (as a measure of first language similarity to English, with the list of Germanic languages from Glottolog (Hammarström et al., 2021)); if their birth country is a part of NATO; if they have ever lived in an English-speaking country, and if so (as a nested variable), for how long. We also create nested covariates for English and the United States in the Germanic first language and birth country in NATO covariates respectively to separate the effects of English and the United States specifically.
In order to satisfy the assumptions for linear regression, in particular the normality of the residuals, we perform a square root transform on our response variable, WIL. The diagnostic plots for the regression assumptions can be found in Section C in the Appendix.
4.2.1 REGRESSION RESULTS
The results of the regression are shown in Table 1 in Section A of the Appendix under the headings Amazon, Google, and Microsoft. We find multiple covariates that have a significant effect across all three services (P < 0.05). Highlights of these findings include:
• WIL increases with a later age of onset of English speaking. As described in Section 3.2, age of onset is correlated with perceived accent.
• WIL decreases with speaking a Germanic first language, having controlled for the effect on WIL of English as a first language, which is also significant.
• Having lived in an English-speaking country has a negative effect on WIL, as does the number of years spent living in an English-speaking country.
• Finally, being born in a country that is a part of NATO but is not the United States is associated with a lower WIL.
The final result suggests that a person’s birth in a country proximate to the United States’ geopolitical power is related to how ASR services perform on their speech.
Some covariates are only significant for certain services: ASR services from Amazon and Microsoft perform significantly worse on males than females. Google and Microsoft perform significantly better for those born in the United States, while Amazon and Google perform significantly better for those who learned English in a naturalistic environment rather than an academic one.
4.3 TRANSCRIPTIONS USING OTHER ENGLISH SETTINGS
As explained in Section 3.2.3, a majority (74.6%) of the speakers in the data set were or had been residents of the United States at the time of recording. Thus, we used the United States English setting of all of the ASR services, as this was likely the setting that would be used on or by them.
However, ASR services do offer more settings for English. It is reasonable to ask how much using all of the English language settings available for a transcription service could improve WIL. We decided to understand this question for the service with the worst overall performance and largest disparity in performance, Google, as shown in Figure 1.
We transcribed the recordings using all available English settings that Google supported. Specifically, we try these English settings on Google’s ASR service: Australia, Canada, Ghana, Hong Kong, India, Ireland, Kenya, New Zealand, Nigeria, Pakistan, Philippines, Singapore, South Africa, Tanzania, United Kingdom and the United States. To give Google the best opportunity for improvement, for each speaker we took the lowest WIL across all settings’ transcriptions. Note that while this is guaranteed to offer the largest improvement, it is unrealistic to do in practice, since it requires knowledge of the ground-truth transcript. We refer to this as Google All Settings.
A comparison to Google’s original performance is offered in Figure 1, where Google refers to transcriptions generated only using the United States setting, and Google All Settings refers to WIL generated using the method above. Originally, we saw that for Google, first language English speakers had a WIL that was on average 0.17 lower than first language non-English speakers. When using the technique for Google All Settings on both first language English and non-English speakers, we notice that a disparity of a similar size (0.14, P < 0.001) still exists. In fact, even when we only use the United States English setting for speakers with a first language of English and allow first language non-English speakers to take the lowest WIL from all settings, the disparity is still a considerable 0.10 (P < 0.001).
We also note that the significance of the factors in the linear regression did not change when compared to Google’s original transcription performance. This result is displayed in Table 1 in Section A of the Appendix under the column Google All Settings. This suggests that even Google’s attempts to adapt their technology to different global settings are subject to the same biases we highlighted originally.
5 DISCUSSION
Across all three ASR services tested (Amazon, Google, and Microsoft), we find significant disparities in performance between those whose first language is English and those whose first language is not English. Moreover, we find that these disparities are connected not only to the age at which an individual began speaking English, the environment they learned it in, and whether or not their first language was Germanic, but also whether or not their birth country is a part of NATO, a representation of political alignment with the United States. When, with one of the services tested (Google), we again transcribed recordings using all of the available English language locality settings, we saw all of our significant results remain the same, implying that the current set of international English language models offered does not solve the inherent problems of bias we observe in ASR.
5.1 HISTORICAL CONTEXT
We are not surprised by the ways in which a neoliberal capitalist power structure provides better services to a select group of English language speakers. In many ways, this parallels the historical colonial use of language, first to standardize language for the benefit of those in power, and second to benefit from hierarchies in language dialects.
Many of today’s globally dominant languages were imposed violently, first within their nations of origin and then through colonial encounters across the world. For instance, the French language was used to turn “peasants into Frenchmen” (Weber, 1976). Within France, French standardization served the dominant class, suppressed the culture of many of its own people, and worked to discipline labor and extract greater profit. ASR systems similarly serve to standardize language: any accent deemed nonstandard is not understood. Speakers are compelled to mimic the dominant accent, which is felt as coerced or even violent (Lawrence, 2021). Providers do not wait to ensure that their ASR product works well across dialects before deploying it – that would break the “move fast and break things” rule and allow competitors to establish market dominance (Hicks, 2021).
Second, the notion of a “standard” language dialect enables social disparity. Education policy during the US occupation of the Philippines enforced racialization; English language lessons were used to emphasize the inferior accent, hygiene and cleanliness of the lower status “Oriental” students (McElhinny & Heller, 2020). In India, British education policies deliberately taught different English dialects to different castes to grant cultural and social authority to a class of elites that aided British governance. Dominant castes entered English medium schools and were tutored by people from England (Gauba, 1974), and were expected to “refine” the “vernacular” languages and teach those to other Indians. They did so while securing their own privileged access to colonial English (Chandra, 2012) and the state power that came with it as the favoured language of governance. The elite monopolized the “correct” language by denying it to other Indians, and thereby made the language selective and exclusive.
ASR services are similarly exclusive; literally providing less control of systems to those with disfavoured accents, and less capability for those in professions which use ASR transcription, e.g., medical professionals. Exclusivity serves an important economic purpose by allowing providers to market beyond the commodity that ASR really is. In this case, it enables marketing of the identity people can have by using it, including the identity of speaking “correct” (white U.S.) English.
5.2 FORESIGHT FOR FUTURE ASR DEVELOPMENT
We provide one example of how the lessons from the historical use of language for dominance may apply to discussions of how ASR will evolve in the future. Academic researchers and industry service providers have made claims that the problems are temporary. For example, Amazon claims “As more people speak to Alexa, and with various accents, Alexa’s understanding will improve” (Harwell, 2018). As another example, researchers state about the ASR racial performance gap: “The likely cause of this shortcoming is insufficient audio data from black speakers when training the models” (Koenecke et al., 2020).
However, historical hindsight has not indicated that ASR services will improve over time without deeper structural changes. First, by selling products that work for one accent above others, technology companies make speakers of other accents less likely to use their products. Members of groups
historically subject to disproportionate state surveillance may be more hesitant to consent to contribute data towards AI technologies (Jo & Gebru, 2020). Both problems operate as a feedback loop that keeps disfavored speakers out of future ASR training data. Further, ML algorithms may naturally tend to discriminate against a smaller group in the population because sacrificing performance on that group may reduce the average “cost” on the population as a whole, even if the training data represents them in proportion to their population (Zou & Schiebinger, 2018). Finally, ground truth labelling is likely to be less accurate for members of a disfavored group, and incorrect labels will be fed back into the training of future systems (Denton, 2019). Whittaker et al. describe an example in computer vision for autonomous vehicles: the more video of people in wheelchairs used in training, the less likely the system was to label a person backing their wheelchair across the street at a crosswalk as a person (Whittaker et al., 2019). Further, in speaker verification, Hutiri and Ding lay out how multiple layers of bias contribute to performance disparities (Hutiri & Ding, 2022). Instead, historical hindsight indicates that the problem in ASR is more systematic, related to the more fundamental nature of the use of standardized language to divide and provide the benefits of control.
In short, the techno-optimist idea that ASR accent bias will resolve itself in time is unconvincing. Similar to how equity and justice should be centered in each layer of linguistic structure for equitable NLP design (Nee et al., 2022), active work will be required to design ASR services that repair damage caused by colonial and post-colonial uses of language and accent to discriminate.
5.3 CONCLUSION
This paper extends the results reported in prior English language ASR performance audits. In part, we provide an audit of ASR using a much larger data set containing speech from a large number of countries of birth as well as a large number of first languages. The quantitative results show how ASR services perform on speakers whose first language is English vs. those for which it is not, and how ASR services perform compared to each other. More critically, we find that, controlling for several related covariates about first language, all ASR services perform significantly worse if the speaker was born outside of a NATO country; in effect, in a country further from United States geopolitical power. We argue that this has historical parallel in the ways in which language has been used historically to maintain global power. By explaining these parallels, and by providing quantitative evidence for the effect, we hope that researchers and developers hoping to reduce disparities in ASR services will be better able to identify the systematic nature of the problems.
6 ETHICS STATEMENT
While the creation and continued upkeep of The Speech Accent Archive does not fall within the scope of this work, we note that all subjects did sign an informed consent form before being recorded, available at https://accent.gmu.edu/pdfs/consent.pdf, and that all data used in this work was anonymized.
It is important to recognize that the data set used in this study overrepresents some groups/backgrounds and underrepresents others, while also not including information on other potential influencing factors such as socioeconomic status. One area of focus for future data collection could be speakers who do not currently reside in the United States.
7 REPRODUCIBILITY STATEMENT
The recordings and associated demographic data used in this experiment are available upon request from the maintainers of The Speech Accent Archive (Weinberger, 2015), or at https://accent. gmu.edu/. Section 3 describes the data set and error metric used in our analysis, while Section B in the Appendix describes the data pipeline, including the steps of recording collection, submission to ASR services, transcript processing, and computation of the error rate. The scripts used in the cleaning and analysis of the data will be hosted on GitHub upon publication of the paper.
A REGRESSION TABLE
Table 1: Speaker-level regression
Dependent Variable: Square Root of Word Information Lost (WIL)
| | Amazon | Google | Microsoft | Google All Settings |
|---|---|---|---|---|
| Age at Time of Recording | 0.001* (0.0003) | 0.002* (0.0004) | 0.001* (0.0004) | 0.001* (0.0004) |
| Age of Onset of English Speaking | 0.006* (0.0005) | 0.004* (0.001) | 0.006* (0.001) | 0.005* (0.001) |
| Male | 0.015* (0.005) | 0.001 (0.007) | 0.022* (0.006) | 0.005 (0.006) |
| Naturalistic Learning Environment | −0.028* (0.009) | −0.042* (0.012) | −0.018 (0.010) | −0.029* (0.010) |
| Unknown Learning Environment | −0.135 (0.081) | −0.007 (0.110) | −0.061 (0.095) | −0.008 (0.093) |
| Germanic First Language | −0.074* (0.012) | −0.082* (0.017) | −0.074* (0.014) | −0.071* (0.014) |
| First Language English [Germanic First Language] | 0.046* (0.017) | 0.124* (0.024) | 0.058* (0.020) | 0.084* (0.020) |
| Birth Country in NATO | −0.060* (0.007) | −0.041* (0.010) | −0.063* (0.009) | −0.040* (0.008) |
| Birth Country USA [Birth Country in NATO] | −0.003 (0.012) | −0.135* (0.016) | −0.027* (0.014) | −0.076* (0.013) |
| Lived in English-Speaking Country | −0.035* (0.009) | −0.053* (0.012) | −0.044* (0.010) | −0.057* (0.010) |
| Years in English-Speaking Country [Lived in English-Speaking Country] | −0.002* (0.0003) | −0.002* (0.0005) | −0.001* (0.0004) | −0.002* (0.0004) |
| Intercept | 0.400* (0.012) | 0.496* (0.016) | 0.283* (0.014) | 0.454* (0.013) |
| Observations | 2,713 | 2,713 | 2,713 | 2,713 |
| R² | 0.332 | 0.258 | 0.266 | 0.274 |
| Adjusted R² | 0.329 | 0.255 | 0.263 | 0.271 |
| Residual Std. Error (df = 2,701) | 0.139 | 0.190 | 0.164 | 0.160 |
| F Statistic (df = 11; 2,701) | 121.930* | 85.259* | 88.773* | 92.863* |

Standard errors in parentheses. Reference classes: Female & Academic Learning Environment. *P < 0.05
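For readers who want to reproduce a model of this form, the sketch below fits an analogous OLS regression with statsmodels on synthetic data. All column names, and the use of interaction terms for the nested coefficients (shown in brackets above), are our assumptions about the specification, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated speaker-level data standing in for the real covariates.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "wil": rng.uniform(0, 1, n),
    "age": rng.integers(18, 80, n),
    "age_onset": rng.integers(0, 30, n),
    "male": rng.integers(0, 2, n),
    "learn_env": rng.choice(["academic", "naturalistic", "unknown"], n),
    "germanic_l1": rng.integers(0, 2, n),
    "english_l1": rng.integers(0, 2, n),
    "birth_nato": rng.integers(0, 2, n),
    "birth_usa": rng.integers(0, 2, n),
    "lived_eng": rng.integers(0, 2, n),
    "years_eng": rng.integers(0, 40, n),
})
df["sqrt_wil"] = np.sqrt(df["wil"])  # variance-stabilizing transform on WIL

# Nested terms (e.g., Birth Country USA within NATO) enter as interactions;
# the academic learning environment is the reference class, as in Table 1.
model = smf.ols(
    "sqrt_wil ~ age + age_onset + male"
    " + C(learn_env, Treatment(reference='academic'))"
    " + germanic_l1 + germanic_l1:english_l1"
    " + birth_nato + birth_nato:birth_usa"
    " + lived_eng + lived_eng:years_eng",
    data=df,
).fit()
print(model.summary())
```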
B SPEECH ACCENT ARCHIVE RECORDING AND PROCESSING
B.1 DATA COLLECTION
The following information about the data collection process comes from The Speech Accent Archive (Weinberger, 2015).
Subjects were sat 8-10 inches from the microphone and recorded individually in a quiet room. They were each asked the following questions:
• Where were you born?
• What is your native language?
• What other languages besides English and your native language do you know?
• How old are you?
• How old were you when you first began to study English?
• How did you learn English (academically or naturalistically)?
• How long have you lived in an English-speaking country? Which country?
Subjects were asked to look over the elicitation paragraph and ask questions about any unfamiliar words. Finally, they read the passage once into a high-quality recording device.
B.2 DATA PROCESSING
All recordings were initially converted into the mp3 file format and then subsequently converted into the formats necessary for transcription by each of the respective services. This was done to help control any effects which might arise from files being originally recorded in lossy instead of lossless formats.
Audio files were submitted to the respective APIs for all three service providers and the returned transcripts were then concatenated into a single string for each speaker. Across all services, only once did a service fail to return a transcription, and this occurred only for a specific triplet of service, speaker, and transcription dialect. Transcripts were then cleaned using the following process:
1. Semicolons were converted to spaces.
2. Characters were converted to lowercase.
3. Hyphens and forward and back slashes were replaced with spaces.
4. All currency symbols, ampersands, equals signs, octothorpes, and percent signs were separated by spaces on both sides.
5. The string was split on spaces to create words.
6. Punctuation at the beginning and end of words was replaced with spaces.
7. Leading and trailing spaces were stripped.
8. Words that were only spaces were deleted.
9. Words exactly equal to the characters “3”, “5”, and “6” were converted to “three”, “five”, and “six”, since these exact numbers appear in the elicitation paragraph as written in Section 3.2.1 and would be correct transcriptions.
10. Spaces were added back in between words and recombined into one string.
After putting all transcripts and the elicitation paragraph through this process, WIL was calculated using the jiwer Python package (Morris et al., 2004).
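The cleaning steps translate almost directly into Python. The sketch below approximates them (it is not the authors' released script) and uses the jiwer package's WIL function mentioned above; check your jiwer version for the exact API.

```python
import re
import string
import jiwer  # pip install jiwer

def clean_transcript(text: str) -> str:
    """Approximation of steps 1-10 above; a sketch, not the exact pipeline."""
    text = text.replace(";", " ").lower()                 # steps 1-2
    text = re.sub(r"[-/\\]", " ", text)                   # step 3
    text = re.sub(r"([$€£¥&=#%])", r" \1 ", text)         # step 4
    words = text.split()                                  # step 5
    words = [w.strip(string.punctuation) for w in words]  # step 6
    words = [w.strip() for w in words]                    # step 7
    words = [w for w in words if w]                       # step 8
    digits = {"3": "three", "5": "five", "6": "six"}      # step 9
    words = [digits.get(w, w) for w in words]
    return " ".join(words)                                # step 10

reference = clean_transcript("Please call Stella; ask her to bring 3 things.")
hypothesis = clean_transcript("please call stella ask her to bring three thing")
print(jiwer.wil(reference, hypothesis))  # Word Information Lost
```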
C CHECKING REGRESSION ASSUMPTIONS
Before looking at the results of our regression, we evaluate the regression assumptions via diagnostic plots in Figures 3, 4, 5, and 6. Due to the square root transform which we performed on WIL (our response variable) in Section 4.2, the diagnostic plots show that our regression assumptions are satisfied, although there are some outliers to investigate. We analyzed each labelled outlier from the plots by hand, first by checking the speaker data to make sure there are no anomalies, and then by listening to the recording to ensure there are no audio issues. Having done this, we proceed to interpret the results of the regression as described in Table 1. | 1. What is the focus of the paper on speech recognition systems?
2. What are the strengths and weaknesses of the paper regarding its contributions and technical aspects?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the concerns or questions raised by the reviewer regarding the paper's findings and implications? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper evaluates the performance differences among accents in ASR systems from Amazon, Google, and Microsoft. The systems show biased word error rates when evaluated on a large and global dataset of speech from the Speech Accent Archive. This reads more like an investigation report than a research paper with novel methodology or technical solutions.
Strengths And Weaknesses
Strong:
1. Biases were found in existing commercial ASR systems from Amazon, Google, and Microsoft for voices from different regions and with different accents.
Weak:
1. The paper lacks technically novel components; it is an investigation report without detailed technical solutions.
Clarity, Quality, Novelty And Reproducibility
The motivations and investigation details are clear. This paper is a high-quality investigation report on the bias of existing ASR systems. However, there is little novelty in terms of the technical contribution.
Detailed comments and questions:
1. Existing systems are biased – not by design – but through the amount of data available for training ASR models. |
ICLR | Title
Meta-free few-shot learning via representation learning with weight averaging
Abstract
Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms. Transfer learning approaches are a natural alternative, but they are restricted to few-shot classification. Moreover, little attention has been on the development of probabilistic models with well-calibrated uncertainty from few-shot samples, except for some Bayesian episodic learning algorithms. To tackle the aforementioned issues, we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification. The resulting method does not require episodic meta-learning and is called meta-free representation learning (MFRL). MFRL first finds low-rank representation generalizing well on meta-test tasks. Given the learned representation, probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty. The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty. In addition, weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms with a wide range of learning paradigms and model architectures.
1 INTRODUCTION
Currently, the vast majority of few-shot learning methods fall within the general paradigm of meta-learning (a.k.a. learning to learn) (Bengio et al., 1991; Schmidhuber, 1987; Thrun & Pratt, 1998), which learns multiple tasks in an episodic manner to distill transferrable knowledge (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017). Although many episodic meta-learning methods report state-of-the-art (SOTA) performance, recent studies show that simple transfer learning methods with fixed embeddings (Chen et al., 2019; Tian et al., 2020) can achieve similar or better performance in few-shot learning. It is found that the effectiveness of optimization-based meta-learning algorithms is due to reusing high-quality representation, instead of rapid learning of task-specific representation (Raghu et al., 2020). The quality of the representation is not quantitatively defined, except for some empirical case studies (Goldblum et al., 2020). Recent machine learning theories (Saunshi et al., 2021) indicate that low-rank representation leads to better sample efficiency in learning a new task. However, those theoretical studies are within the paradigm of meta-learning and do not reveal how to obtain low-rank representation for few-shot learning outside the realm of meta-learning. This motivates us to investigate ways to improve the representation for adapting to new few-shot tasks in a meta-free manner, taking advantage of the simplicity and robustness of transfer learning.
In parallel, existing transfer learning methods also have limitations. That is, the existing transfer learning methods may not find representation generalizing well to unseen few-shot tasks (Chen et al., 2019; Dhillon et al., 2020), compared with state-of-the-art meta-learning methods (Ye et al., 2020; Zhang et al., 2020). Although some transfer learning methods utilize knowledge distillation and self-supervised training to achieve strong performance in few-shot classification, they are restricted to few-shot classification problems (Mangla et al., 2020; Tian et al., 2020). To the best of our knowledge, no transfer learning method has been developed that achieves performance similar to meta-learning in few-shot regression. As such, it is desirable to have a transfer learning method that finds high-quality representation generalizing well to unseen classification and regression problems.
The last limitation of the existing transfer learning methods in few-shot learning is the lack of uncertainty calibration. Uncertainty quantification is concerned with quantifying how likely certain outcomes are. Despite a plethora of methods in few-shot learning (in fact, machine learning in general) for improving point-estimation accuracy, few methods have been developed to obtain probabilistic models with improved uncertainty calibration by integrating Bayesian learning into episodic meta-training (Grant et al., 2018; Finn et al., 2018; Yoon et al., 2018; Snell & Zemel, 2021). Few-shot learning models can be used in risk-averse applications such as medical diagnosis (Prabhu et al., 2019). The diagnosis decision is made not only on point estimation but also on the probabilities associated with the prediction. The risk of making wrong decisions is significant when using uncalibrated models (Begoli et al., 2019). Thus, the development of proper fine-tuning steps to achieve well-calibrated models is the key towards practical applications of transfer learning in few-shot learning.
In this paper, we develop a simple transfer learning method as our own baseline to allow easy regularization towards more generalizable representation and calibration of prediction uncertainty. The regularization in the proposed transfer learning method works for regression and classification problems so that we can handle both problems within a common architecture. The calibration procedure is easily integrated into the developed transfer learning method to obtain few-shot learning models with good uncertainty quantification. Therefore, the resulting method, called Meta-Free Representation Learning (MFRL), overcomes the aforementioned limitations in existing transfer learning methods for few-shot learning. Our empirical studies demonstrate that the relatively overlooked transfer learning method can achieve high accuracy and well-calibrated uncertainty in few-shot learning when it is combined with the proper regularization and calibration. Those two tools are also portable to meta-learning methods to improve accuracy and calibration, but the improvement is less significant compared with that of transfer learning.
We use stochastic weight averaging (SWA) (Izmailov et al., 2018), which is agnostic to loss function types, as implicit regularization to improve the generalization capability of the representation. We also shed light on the fact that the effectiveness of SWA is due to its bias towards low-rank representation. To address the issue of uncertainty quantification, we fine-tune appropriate linear layers during the meta-test phase to get models with well-calibrated uncertainty. In MFRL, hierarchical Bayesian linear models are used to properly capture the uncertainty from very limited training samples in few-shot regression, whereas the softmax output is scaled with a temperature parameter to make the few-shot classification model well-calibrated. Our method is the first to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase, without any learning mechanisms related to the meta-training or representation learning phase.
Our contributions in this work are summarized as follows:
• We propose a transfer learning method that can handle both few-shot regression and classification problems with performance exceeding SOTA.
• For the first time, we empirically find the implicit regularization of SWA towards low-rank representation, which is a useful property in transferring to few-shot tasks.
• The proposed method results in well-calibrated uncertainty in few-shot learning models while preserving SOTA accuracy.
• The implicit regularization of SWA and temperature scaling factor can be applied to existing meta-learning methods to improve their accuracy and reliability in few-shot learning.
2 RELATED WORK
Episodic meta-learning approaches can be categorized into metric-based and optimization-based methods. Metric-based methods project input data to feature vectors through nonlinear embeddings and compare their similarity to make the prediction. Examples of similarity metrics include the weighted L1 metric (Koch et al., 2015), cosine similarity (Qi et al., 2018; Vinyals et al., 2016), and Euclidean distance to class-mean representation (Snell et al., 2017). Instead of relying on predefined metrics, learnable similarity metrics are introduced to improve the few-shot classification performance (Oreshkin et al., 2018; Sung et al., 2018). Recent metric-based approaches focus on developing task-adaptive embeddings to improve few-shot classification accuracy. Those task-adaptive embeddings include attention mechanisms for feature transformation (Fei et al., 2021; Gidaris & Komodakis, 2018; Ye et al., 2020; Zhang et al., 2021), graph neural networks (Garcia & Estrach,
2018), implicit class representation (Ravichandran et al., 2019), and task-dependent conditioning (Oreshkin et al., 2018; Yoon et al., 2020; 2019). Although metric-based approaches achieve strong performance in few-shot classification, they cannot be directly applied to regression problems.
Optimization-based meta-learning approaches try to find transferrable knowledge and adapt to new tasks quickly. An elegant and powerful meta-learning approach, termed model-agnostic metalearning (MAML), solves a bi-level optimization problem to find good initialization of model parameters (Finn et al., 2017). However, MAML has a variety of issues, such as sensitivity to neural network architectures, instability during training, arduous hyperparameter tuning, and high computational cost. On this basis, some follow-up methods have been developed to simplify, stabilize and improve the training process of MAML (Antoniou et al., 2018; Flennerhag et al., 2020; Lee & Choi, 2018; Nichol et al., 2018; Park & Oliva, 2019). In practice, it is very challenging to learn high-dimensional model parameters in a low-data regime. Latent embedding optimization (LEO) attempts to learn low-dimensional representation to generate high-dimensional model parameters (Rusu et al., 2019). Meanwhile, R2-D2 (Bertinetto et al., 2019) and MetaOptNet (Lee et al., 2019) reduce the dimensionality of trainable model parameters by freezing feature extraction layers during inner loop optimization. Note that the proposed method is fundamentally different from R2-D2 and MetaOptNet because our method requires neither episodic meta-learning nor bi-level optimization.
Transfer learning approaches first learn a feature extractor on all training data through standard supervised learning, and then fine-tune a linear predictor on top of the learned feature extractor in a new task (Chen et al., 2019). However, vanilla transfer learning methods for few-shot learning do not take extra steps to make the learned representation generalize well to unseen meta-test tasks. Some approaches in this paradigm are developed to improve the quality of representation and boost the accuracy of few-shot classification, including cooperative ensembles (Dvornik et al., 2019), knowledge distillation (Tian et al., 2020), and auxiliary self-supervised learning (Mangla et al., 2020). Nevertheless, those transfer learning methods are restricted to few-shot classification. MFRL aims to find representation that generalizes well from the perspective of low-rank representation learning, which is supported by recent theoretical studies (Saunshi et al., 2021). Furthermore, MFRL is the first transfer learning method that can handle both few-shot regression and classification problems and make predictions with well-calibrated uncertainty.
3 BACKGROUND
3.1 EPISODIC META-LEARNING
In episodic meta-learning, the meta-training data contains $T$ episodes or tasks, where the $\tau$-th episode consists of data $\mathcal{D}_\tau = \{(\mathbf{x}_{\tau,j}, \mathbf{y}_{\tau,j})\}_{j=1}^{N_\tau}$ with $N_\tau$ samples. Tasks and episodes are used interchangeably in the rest of the paper. Episodic meta-learning algorithms aim to find common model parameters $\theta$ which can be quickly adapted to task-specific parameters $\phi_\tau$ ($\tau = 1, \ldots, T$). For example, MAML-type algorithms assume $\phi_\tau$ is one or a few gradient steps away from $\theta$ (Finn et al., 2017; 2018; Grant et al., 2018; Yoon et al., 2018), while other meta-learning approaches assume that $\phi_\tau$ and $\theta$ share the parameters in the feature extractor and only differ in the top layer (Bertinetto et al., 2019; Lee et al., 2019; Snell et al., 2017).
3.2 STOCHASTIC WEIGHT AVERAGING
The idea of stochastic weight averaging (SWA) along the trajectory of SGD goes back to Polyak–Ruppert averaging (Polyak & Juditsky, 1992). Theoretically, weight averaging results in faster convergence for linear models in supervised learning and reinforcement learning (Bach & Moulines, 2013; Lakshminarayanan & Szepesvari, 2018). In deep learning, we are more interested in tail stochastic weight averaging (Jain et al., 2018), which averages the weights after $T$ training epochs. The averaged model parameters $\theta_{\mathrm{SWA}}$ can be computed by running $s$ additional training epochs using SGD:
$$\theta_{\mathrm{SWA}} = \frac{1}{s} \sum_{i=T+1}^{T+s} \theta_i, \qquad (1)$$
where $\theta_i$ denotes the model parameters at the end of the $i$-th epoch. SWA has been applied to supervised learning of deep neural networks to achieve higher test accuracy (Izmailov et al., 2018).
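As a concrete illustration of Eq. 1, the following PyTorch sketch runs $s$ extra epochs at a constant learning rate and keeps a running average of the end-of-epoch weights. It is a minimal implementation of tail averaging under the assumption of a model without batch-norm buffers (which would otherwise need to be recomputed); torch.optim.swa_utils offers a maintained alternative.

```python
import copy
import torch

def swa_tail_average(model, loader, loss_fn, opt, s=100):
    """Tail SWA (Eq. 1): continue SGD for s extra epochs at a constant learning
    rate and average the end-of-epoch weights theta_{T+1}, ..., theta_{T+s}."""
    swa_model = copy.deepcopy(model)  # holds the running average theta_SWA
    for i in range(1, s + 1):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Running mean update: avg <- avg + (theta_i - avg) / i
        with torch.no_grad():
            for p_avg, p in zip(swa_model.parameters(), model.parameters()):
                if i == 1:
                    p_avg.copy_(p)
                else:
                    p_avg.add_(p - p_avg, alpha=1.0 / i)
    return swa_model
```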
4 METHODOLOGY
The proposed method is a two-step learning algorithm: meta-free representation learning followed by fine-tuning. We employ SWA to make the learned representation low-rank and generalize better to meta-test data. Given a meta-test task, a new top layer is fine-tuned with few-shot samples to obtain probabilistic models with well-calibrated uncertainty. Note that MFRL can be used for both regression and classification depending on the loss function. The pseudocode of MFRL is presented in Appendix A.1.
4.1 REPRESENTATION LEARNING
Common representation can be learned via maximization of the likelihood of all training data with respect to $\theta$, rather than following episodic meta-learning. To do so, we group the data $\mathcal{D}_\tau = \{(\mathbf{x}_{\tau,j}, \mathbf{y}_{\tau,j})\}_{j=1}^{N_\tau}$ from all meta-training tasks into a single dataset $\mathcal{D}_{tr}$. Given the aggregated training data $\mathcal{D}_{tr} = \{\mathbf{X}, \mathbf{Y}\}$, representation can be learned by maximizing the likelihood $p(\mathcal{D}_{tr} \mid \theta)$ with respect to $\theta$. Let $\theta = [\theta_f, \mathbf{W}]$, where $\theta_f$ represents parameters in the feature extractor and $\mathbf{W}$ denotes the parameters in the top linear layer. The feature extractor $h(\mathbf{x}) \in \mathbb{R}^p$ is a neural network parameterized by $\theta_f$ that outputs a feature vector of dimension $p$. The specific form of the loss function depends on whether the task is regression or classification and is given as follows:
$$\mathcal{L}_{RP}(\theta) = -\log p(\mathcal{D}_{tr} \mid \theta) = \begin{cases} \mathcal{L}_{MSE}(\theta), & \text{regression} \\ \mathcal{L}_{CE}(\theta), & \text{classification} \end{cases}$$

where

$$\mathcal{L}_{MSE}(\theta) = \frac{1}{2N'} \sum_{\tau=1}^{T} \sum_{j=1}^{N_\tau} \left( y_{\tau,j} - \mathbf{w}_\tau^\top h(\mathbf{x}_{\tau,j}) \right)^2, \qquad (2)$$

$$\mathcal{L}_{CE}(\theta) = -\sum_{j=1}^{N'} \sum_{c=1}^{C} y_{j,c} \log \frac{\exp(\mathbf{w}_c^\top h(\mathbf{x}_j))}{\sum_{c'=1}^{C} \exp(\mathbf{w}_{c'}^\top h(\mathbf{x}_j))}. \qquad (3)$$
For regression problems, the model learns $T$ regression tasks ($\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_T] \in \mathbb{R}^{(p+1) \times T}$) simultaneously using the loss function $\mathcal{L}_{MSE}$ given in Eq. 2, whereas the model learns a C-class classification model¹ ($\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_C] \in \mathbb{R}^{(p+1) \times C}$) for classification problems using the loss function $\mathcal{L}_{CE}$ in Eq. 3. The loss function - either Eq. 2 or 3 - can be minimized through standard stochastic gradient descent, where $N' = \sum_{\tau=1}^{T} N_\tau$ is the total number of training samples.
Post-processing via SWA. Minimizing the loss functions in Eq. 2 and 3 by SGD may not necessarily result in representation that generalizes well to few-shot learning tasks in the meta-test set. The last hidden layer of a modern deep neural network is high-dimensional and may contain spurious features that over-fit the meta-training data. Recent meta-learning theories indicate that better sample complexity in learning a new task can be achieved via low-rank representation, whose singular values decay faster (Saunshi et al., 2021). We aim to find low-rank representation $\Phi = h(\mathbf{X})$ without episodic meta-learning, which is equivalent to finding the conjugate kernel $K_C = \Phi \Phi^\top$ with fast-decaying eigenvalues. To link the representation with the parameter space, we can linearize the neural network by a first-order Taylor expansion at $\theta_T$ and get the finite-width neural tangent kernel (NTK) $K_{NTK}(\mathbf{X}, \mathbf{X}) = J(\mathbf{X}) J(\mathbf{X})^\top$, where $J(\mathbf{X}) = \nabla_\theta f_\theta(\mathbf{X}) \in \mathbb{R}^{N' \times |\theta|}$ is the Jacobian matrix, and $K_{NTK}$ is a composite kernel containing $K_C$ (Fan & Wang, 2020). The distributions of eigenvalues for $K_{NTK}$ and $K_C$ are empirically similar, so analyzing $K_{NTK}$ could shed light on the properties of $K_C$. In parallel, $K_{NTK}$ shares the same eigenvalues with the Gauss-Newton matrix $G = \frac{1}{N'} J(\mathbf{X})^\top J(\mathbf{X})$. For linearized networks with squared loss, the Gauss-Newton matrix $G$ well approximates the Hessian matrix $H$ when $\mathbf{y}$ is well-described by $f_\theta(\mathbf{x})$ (Martens, 2020). This is the case when SGD converges to $\theta_T$ within a local minimum basin. A Hessian matrix with many small eigenvalues corresponds to a flat minimum, where the loss function is less sensitive to perturbations of the model parameters (Keskar et al., 2017). It is known that averaging the weights after SGD convergence in a local minimum basin pushes $\theta_T$ towards the flat side of the loss valley (He et al., 2019). As a result, SWA could result in a faster decay of eigenvalues in the kernel matrix, and thus low-rank representation. Our conjecture about SWA as implicit regularization towards low-rank representation is empirically verified in Section 5.

¹C is the total number of classes in $\mathcal{D}_{tr}$. Learning a C-class classification model solves all possible tasks in the meta-training dataset because each task $\mathcal{D}_\tau$ only contains a subset of the C classes.
4.2 FINE-TUNING
After representation learning is complete, W is discarded and θf is frozen in a new few-shot task. Given the learned representation, we train a new probabilistic top layer in a meta-test task using fewshot samples. The new top layer will be configured differently depending on whether the few-shot task is a regression or a classification problem.
In a few-shot regression task, we learn a new linear regression model $y = \mathbf{w}^\top h(\mathbf{x}) + \epsilon$ on a fixed feature extractor $h(\mathbf{x}) \in \mathbb{R}^p$ with few-shot training data $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^n$, where $\mathbf{w}$ denotes the model parameters and $\epsilon$ is Gaussian noise with zero mean and variance $\sigma^2$. To avoid interpolation on few-shot training data ($n \ll p$), a Gaussian prior $p(\mathbf{w} \mid \lambda) = \prod_{i=0}^{p} \mathcal{N}(w_i \mid 0, \lambda)$ is placed over $\mathbf{w}$, where $\lambda$ is the precision of the Gaussian prior. However, it is difficult to obtain an appropriate value for $\lambda$ in a few-shot regression task because no validation data is available in $\mathcal{D}$. Hierarchical Bayesian linear models can be used to obtain optimal regularization strength and grounded uncertainty estimation using few-shot training data only. To complete the specification of the hierarchical Bayesian model, the hyperpriors on $\lambda$ and $\sigma^2$ are defined as $p(\lambda) = \mathrm{Gamma}(\lambda \mid a, b)$ and $p(\sigma^{-2}) = \mathrm{Gamma}(\sigma^{-2} \mid c, d)$, respectively. The hyperpriors become very flat and non-informative when $a$, $b$, $c$, and $d$ are set to very small values. The posterior over all latent variables given the data is $p(\mathbf{w}, \lambda, \sigma^2 \mid \mathbf{X}, \mathbf{y})$, where $\mathbf{X} = \{\mathbf{x}_i\}_{i=1}^n$ and $\mathbf{y} = \{y_i\}_{i=1}^n$. However, this posterior is intractable, so iterative optimization-based approximate inference (Tipping, 2001) is chosen because it is highly efficient: point estimates of $\lambda$ and $\sigma^2$ are obtained by maximizing the marginal likelihood $p(\mathbf{y} \mid \mathbf{X}, \lambda, \sigma^2)$, the posterior over model parameters $p(\mathbf{w} \mid \mathbf{X}, \mathbf{y}, \lambda, \sigma^2)$ is computed using the estimated $\lambda$ and $\sigma^2$, and the two steps are repeated alternately until convergence.
The predictive distribution for a new sample $\mathbf{x}^*$ is

$$p(y^* \mid \mathbf{x}^*, \mathbf{X}, \mathbf{y}, \lambda, \sigma^2) = \int p(y^* \mid \mathbf{x}^*, \mathbf{w}, \sigma^2)\, p(\mathbf{w} \mid \mathbf{X}, \mathbf{y}, \lambda, \sigma^2)\, d\mathbf{w}, \qquad (4)$$
which can be computed analytically because both distributions on the right-hand side of Eq. 4 are Gaussian. Consequently, the hierarchical Bayesian linear model avoids over-fitting on few-shot training data and quantifies predictive uncertainty.
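For reference, scikit-learn's BayesianRidge implements this style of evidence maximization (Gamma hyperpriors on the weight precision $\lambda$ and noise precision $\sigma^{-2}$, iterated point estimation as in Tipping (2001)), so a sketch of the per-task fit can be as short as the following; the feature values are fabricated stand-ins for $h(\mathbf{x})$.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 40))  # h(x) of n = 10 support samples (fabricated)
targets = rng.normal(size=10)

# alpha_* parameterize the Gamma hyperprior on the noise precision and
# lambda_* the one on the weight precision; 1e-6 gives a flat, non-informative prior.
blr = BayesianRidge(alpha_1=1e-6, alpha_2=1e-6, lambda_1=1e-6, lambda_2=1e-6)
blr.fit(features, targets)

x_star = rng.normal(size=(1, 40))
mean, std = blr.predict(x_star, return_std=True)  # Gaussian predictive (Eq. 4)
print(mean[0], std[0])
```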
In a few-shot classification task, a new logistic regression model is learned on the post-processed representation. A typical K-way n-shot classification task $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{nK}$ consists of $K$ classes (different from the meta-training classes) and $n$ training samples per class. Minimizing an un-regularized cross-entropy loss results in a significantly over-confident classification model because the norm of the logistic regression model parameters $\mathbf{W} \in \mathbb{R}^{(p+1) \times K}$ becomes very large when few-shot training samples can be perfectly separated in the setting $nK \ll p$. A weighted L2 regularization term is added to the cross-entropy loss to mitigate the issue:
$$\mathcal{L}(\mathbf{W}) = -\sum_{i=1}^{nK} \sum_{c=1}^{K} y_{i,c} \log \frac{\exp(\mathbf{w}_c^\top h(\mathbf{x}_i))}{\sum_{c'=1}^{K} \exp(\mathbf{w}_{c'}^\top h(\mathbf{x}_i))} + \lambda \sum_{c=1}^{K} \mathbf{w}_c^\top \mathbf{w}_c, \qquad (5)$$
where λ is the regularization coefficient, which affects the prediction accuracy and uncertainty. It is difficult to select an appropriate value of λ in each of the meta-test tasks due to the lack of validation data in D. We instead treat λ as a global hyper-parameter so that the value of λ should be determined based on the accuracy on meta-validation data. Note that the selected λ with high validation accuracy does not necessarily lead to well calibrated classification models. As such, we introduce the temperature scaling factor (Guo et al., 2017) as another global hyper-parameter to
scale the softmax output. Given a test sample $\mathbf{x}^*$, the predicted probability for class $c$ becomes

$$p_c = \frac{\exp(\mathbf{w}_c^\top h(\mathbf{x}^*)/T)}{\sum_{c'=1}^{K} \exp(\mathbf{w}_{c'}^\top h(\mathbf{x}^*)/T)}, \qquad (6)$$
where $T$ is the temperature scaling factor. In practice, we select the L2 regularization coefficient $\lambda$ and the temperature scaling factor $T$ as follows. First, we set $T$ to 1 and do a grid search on the meta-validation data to find the $\lambda$ resulting in the highest meta-validation accuracy. However, fine-tuning $\lambda$ alone does not ensure good calibration; it is the temperature scaling factor that ensures good uncertainty calibration. Similarly, we do a grid search for $T$ on the meta-validation set and choose the temperature scaling factor resulting in the lowest expected calibration error (Guo et al., 2017). Note that different values of $T$ do not affect the classification accuracy because temperature scaling is accuracy preserving.
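The classification fine-tuning step can likewise be sketched with scikit-learn. Note that sklearn's regularization parameter maps to the penalty in Eq. 5 roughly as $C = 1/(2\lambda)$; the values of $\lambda$ and $T$ below are placeholders for the ones found by grid search on meta-validation data, and the features are fabricated.

```python
import numpy as np
from scipy.special import softmax
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K, n, p = 5, 5, 640                      # 5-way 5-shot task, feature dim p
support_h = rng.normal(size=(n * K, p))  # frozen features h(x) (fabricated)
support_y = np.repeat(np.arange(K), n)

lam, T = 1.0, 4.0                        # placeholders for grid-searched values
clf = LogisticRegression(C=1.0 / (2 * lam), max_iter=1000)
clf.fit(support_h, support_y)

query_h = rng.normal(size=(75, p))
logits = clf.decision_function(query_h)  # w_c^T h(x*) for each class c
probs = softmax(logits / T, axis=1)      # Eq. 6; scaling preserves the argmax
print(probs.argmax(axis=1)[:10])
```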
5 EXPERIMENTS
We follow the standard setup in few-shot learning literature. The model is trained on a meta-training dataset and hyper-parameters are selected based on the performance on a meta-validation dataset. The final performance of the model is evaluated on a meta-test dataset. The proposed method is applied to few-shot regression and classification problems and compared against a wide range of alternative methods.
5.1 FEW-SHOT REGRESSION RESULTS
Sine waves (Finn et al., 2017) and head pose estimation (Patacchiola et al., 2020) datasets are used to evaluate the performance of MFRL in few-shot regression. We use the same backbones in literature (Patacchiola et al., 2020) to make fair comparisons. Details of the few-shot regression experiments can be found in Appendix A.2.
The results for few-shot regression are summarized in Table 1. In the sine wave few-shot regression, MFRL outperforms all meta-learning methods, demonstrating that high-quality representation can be learned in supervised learning, without episodic meta-learning. Although DKT with a spectral mixture (SM) kernel achieves high accuracy, the good performance should be attributed to the strong inductive bias to periodic functions in the SM kernel (Wilson & Adams, 2013). Additional results for MFRL with different activation functions are reported in Appendix A.3. In the head pose estimation experiment, MFRL also achieves the best accuracy. In both few-shot regression problems, SWA results in improved accuracy, suggesting that SWA can improve the quality of features and facilitate the learning of downstream tasks. In Fig. 1, uncertainty is correctly estimated by the hierarchical Bayesian linear model with learned features using just 10 training samples.
5.2 FEW-SHOT CLASSIFICATION RESULTS
We conduct few-shot classification experiments on four widely used few-shot image recognition benchmarks: miniImageNet (Ravi & Larochelle, 2017), tieredImageNet (Ren et al., 2018), CIFAR-FS (Bertinetto et al., 2019), and FC100 (Oreshkin et al., 2018). In addition, we test our approach on a cross-domain few-shot classification task from miniImageNet to CUB. The experiment details about the few-shot classification datasets can be found in Appendix A.2. The proposed method is applied to three widely used network architectures: ResNet-12 (Lee et al., 2019; Ravichandran et al., 2019), wide ResNet (WRN-28-10) (Dhillon et al., 2020; Rusu et al., 2019), and a 4-layer convolutional neural network with 64 channels (Chen et al., 2019; Patacchiola et al., 2020) (in Appendix A.3).
The results of the proposed method and previous SOTA methods using similar backbones are reported in Table 2 and 3. The proposed method achieves the best performance in most of the experiments when compared with previous SOTA methods. Our method is closely related to Baseline++ (Chen et al., 2019) and fine-tuning on logits (Dhillon et al., 2020). Baseline++ normalizes both classification weights and features, while the proposed method only normalizes features. It allows our method to find a more accurate model in a more flexible hypothesis space, given high-quality representation. Compared with fine-tuning on logits, our method obtains better results by learning
a new logistic regression model on features, which store richer information about the data. Some approaches pretrain a C-class classification model on all training data and then apply highly sophisticated meta-learning techniques to the pretrained model to achieve SOTA performance (Rusu et al., 2019; Sun et al., 2019). Our approach with SWA outperforms those pretrained-then-meta-learned models, which demonstrates that SWA obtains high-quality representation that generalizes well to unseen tasks. Compared with improving representation quality for few-shot classification via selfdistillation (Tian et al., 2020), the computational cost of SWA is significantly smaller because it does not require training models from scratch multiple times. Moreover, SWA can be applied to find good representation for both few-shot regression and classification, while previous transfer learning approaches can only handle few-shot classification problems (Mangla et al., 2020; Tian et al., 2020).
MFRL is also applied to the cross-domain few-shot classification task as summarized in Table 4. MFRL outperforms other methods in this challenging task, indicating that the learned representation has strong generalization capability. We use the same hyperparameters (training epochs, learning rate, learning rate in SWA, SWA epoch, etc.) as in Table 2. The strong results indicate that MFRL is robust to hyperparameter choice. Surprisingly, meta-learning methods with adaptive embeddings do not outperform simple transfer learning methods like Baseline++ when the domain gap between base classes and novel classes is large. We notice that Tian et al. (2020) also reports similar results that transfer learning methods show superior performance on a large-scale cross-domain few-shot classification dataset. We still believe that adaptive embeddings should be helpful when the domain gap between base and novel classes is large. Nevertheless, how to properly train a model to obtain useful adaptive embeddings in novel tasks is an open question.
5.3 EFFECTIVE RANK OF THE REPRESENTATION
The rank of a representation defines the number of independent bases. In deep learning, noise in gradients and numerical imprecision can cause the resulting matrix to be full-rank. Therefore, simply counting the number of non-zero singular values may not be an effective way to measure the rank of the representation. To compare the effective ranks, we plot the normalized singular values of the representation of meta-test data in Fig. 2, where the representation with SWA has a faster decay in singular values, thus indicating the lower effective rank of the representation with SWA. The results empirically verify our conjecture that SWA is an implicit regularizer towards low-rank representation.
[Figure 2 shows four panels (miniImageNet, tieredImageNet, CIFAR-FS, FC100) plotting the normalized singular value $\bar{\sigma}_i = \sigma_i / \sigma_{\max}$ against the singular value index $i$, with and without SWA.]

Figure 2: Normalized singular values for representation with and without SWA. The metric $-\sum_i \bar{\sigma}_i \log \bar{\sigma}_i$, where $\bar{\sigma}_i = \sigma_i / \sigma_{\max}$, is used to measure the effective rank of the representation: it equals 68.49, 83.98, 40.94, and 40.22 without SWA, and 58.49, 80.01, 33.63, and 34.34 with SWA, on miniImageNet, tieredImageNet, CIFAR-FS, and FC100, respectively. Faster decay in singular values indicates that fewer dimensions capture the most variation in all dimensions, thus lower effective rank.
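The effective-rank metric in Fig. 2 is easy to compute from a feature matrix; a minimal sketch follows, where the feature matrices are random stand-ins rather than actual network activations.

```python
import numpy as np

def effective_rank(features: np.ndarray) -> float:
    """Entropy-style metric from Fig. 2: -sum_i sbar_i * log(sbar_i), where
    sbar_i = sigma_i / sigma_max are the max-normalized singular values."""
    s = np.linalg.svd(features, compute_uv=False)
    s_bar = s / s.max()
    return float(-np.sum(s_bar * np.log(s_bar + 1e-12)))

rng = np.random.default_rng(0)
full_rank = rng.normal(size=(1000, 640))
low_rank = rng.normal(size=(1000, 20)) @ rng.normal(size=(20, 640))
print(effective_rank(full_rank), effective_rank(low_rank))  # low-rank is smaller
```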
5.4 FEW-SHOT CLASSIFICATION RELIABILITY
The proposed method not only achieves high accuracy in few-shot classification but also makes the classification uncertainty well-calibrated. A reliability diagram can be used to check model calibration visually, which plots an identity function between prediction accuracy and confidence when the model is perfectly calibrated (DeGroot & Fienberg, 1983). Fig. 3 shows the classification reliability diagrams along with widely used metrics for uncertainty calibration, including expected calibration error (ECE) (Guo et al., 2017), maximum calibration error (MCE) (Naeini et al., 2015), and Brier score (BRI) (Brier, 1950). ECE measures the average binned difference between confidence and accuracy, while MCE measures the maximum difference. BRI is the squared error between the predicted probabilities and one-hot labels. MAML is over-confident because tuning a deep neural network on few-shot data is prone to over-fitting. Meanwhile, Proto Net and Matching Net are better calibrated than MAML because they do not fine-tune the entire network during testing. Nevertheless, they are still slightly over-confident. The results indicate that MFRL with a global temperature scaling factor can learn well-calibrated models from very limited training samples.
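For reference, ECE, the main calibration metric used in Fig. 3, can be computed as below; MCE replaces the weighted sum with a maximum, and the Brier score is the squared error between predicted probabilities and one-hot labels. This is a minimal sketch of the standard definition.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE (Guo et al., 2017): bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by the fraction of samples per bin."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```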
5.5 APPLICATION IN META-LEARNING
Meanwhile, we also apply SWA to episodic meta-learning methods, such as Proto Net, MAML, and Matching Net, to improve their classification accuracy. The results in Table 5 indicate that SWA can improve few-shot classification accuracy in both transfer learning and episodic meta-learning. SWA is orthogonal to the learning paradigm and model architecture; thus, it can be applied to a wide range of few-shot learning methods to improve accuracy.
Furthermore, the temperature scaling factor can be applied to calibrate meta-learning methods, including MAML, Proto Net, and Matching Net. The reliability diagrams in Fig. 3 indicate that the temperature scaling factor not only calibrates classification uncertainty of transfer learning approaches, such as the proposed MFRL, but also makes the classification uncertainty well-calibrated in episodic meta-learning methods. Therefore, the temperature scaling factor can be applied to a wide range of few-shot classification methods to get well-calibrated uncertainty, while preserving the classification accuracy.
6 DISCUSSION
SWA has been applied to supervised learning of deep neural networks (Izmailov et al., 2018; Athiwaratkun et al., 2019) and its effectiveness was attributed to convergence to a solution on the flat side of an asymmetric loss valley (He et al., 2019). However, this does not explain the effectiveness of SWA in few-shot learning, because the meta-training and meta-test losses are not comparable after the top layer is retrained with the few-shot support data in a meta-test task. The effectiveness of SWA in few-shot learning must instead be related to a property of the representation. Although our results empirically demonstrate that SWA results in low-rank representation, further research on this connection is needed.
Explicit regularizers can also be used to obtain simple input-output functions in deep neural networks and low-rank representation, including L1 regularization, nuclear norm, spectral norm, and Frobenius norm (Bartlett et al., 2017; Neyshabur et al., 2018; Sanyal et al., 2020). However, some of those explicit regularizers are not compatible with standard SGD training or are computationally expensive. In addition, it is difficult to choose the appropriate strength of explicit regularization. Too strong explicit regularization can bias towards simple solutions that do not fit the data. In comparison, SWA is an implicit regularizer that is completely compatible with the standard SGD training without much extra computational cost. Thus, it can be easily combined with transfer learning and meta-learning to obtain more accurate few-shot learning models. In parallel, SWA is also robust to the choice of the hyperparameters - the learning rate and training epochs in the SWA stage (see details in Appendix A.4).
7 CONCLUSIONS
In this article, we propose MFRL to obtain accurate and reliable few-shot learning models. SWA is an implicit regularizer towards low-rank representation, which generalizes well to unseen meta-test tasks. The proposed method can be applied to both classification and regression tasks. Extensive experiments show that our method not only outperforms other SOTA methods on various datasets but also correctly quantifies the uncertainty in prediction.
A APPENDIX
A.1 PSEUDO CODE FOR MFRL
Algorithm 1 Meta-free representation learning for few-shot learning
1. Merge all training tasks into $\mathcal{D}_{tr} = \{\mathcal{D}_\tau\}_{\tau=1}^T$
2. Initialize model parameters $\theta = [\theta_f, \mathbf{W}]$
3. Maximize the likelihood $p(\mathcal{D}_{tr} \mid \theta)$ on all training data using SGD:
   - minimize the squared loss (Eq. 2) for regression problems
   - minimize the cross-entropy loss (Eq. 3) for classification problems
4. Run SWA to obtain $\theta_{\mathrm{SWA}}$
5. Discard $\mathbf{W}$ and freeze $\theta_f$
6. Learn a new top layer using support data $\mathcal{D}$ in a test task:
   - a hierarchical Bayesian linear model for a regression task
   - a logistic regression model with the temperature scaling factor for a classification task
A.2 EXPERIMENT DETAILS
Sine waves are generated by $y = A \sin(x - \varphi) + \epsilon$, where amplitude $A \in [0.1, 5.0]$, phase $\varphi \in [0, \pi]$, and $\epsilon$ is white noise with a standard deviation of 0.1 (Finn et al., 2017). Each sine wave contains 200 samples, with $x$ sampled uniformly from $[-5.0, 5.0]$. We generate 500 waves each for training, validation, and testing. All sine waves are different from each other. We use the same backbone network described in MAML (Finn et al., 2017): a two-layer MLP with 40 hidden units in each layer. We use the SGD optimizer with a learning rate of $10^{-3}$ over $8 \times 10^4$ training iterations and run SWA over $2 \times 10^4$ training iterations with a learning rate of 0.05.
Head pose regression data is derived from the Queen Mary University of London multi-view face dataset (Gong et al., 1996). It contains images from 37 people and 133 facial images per person. Facial images cover a view sphere of 90◦ in yaw and 120◦ in tilt. The dataset is divided into 3192 training samples (24 people), 1064 validation samples (8 people), and 665 test samples (5 people). We use the same feature extractor described in literature (Patacchiola et al., 2020): a three-layer convolutional neural network, each with 36 output channels, stride 2, and dilation 2. We train the model on the training people set for 300 epochs using the SGD optimizer with a learning rate of 0.01 and run 25 epochs of SWA with a learning rate of 0.01.
miniImageNet is a 100-class subset of the original ImageNet dataset (Deng et al., 2009) for fewshot learning (Vinyals et al., 2016). Each class contains 600 images in RGB format of the size 84 × 84. miniImageNet is split into 64 training classes, 16 validation classes, and 20 testing classes, following the widely used data splitting protocol (Ravi & Larochelle, 2017).
tieredImageNet is another subset of the ImageNet dataset for few-shot learning (Ren et al., 2018). It contains 608 classes grouped into 34 categories, which are split into 20 training categories (351 classes), 6 validation categories (97 classes), and 8 testing categories (160 classes). Compared with miniImageNet, training classes in tieredImageNet are sufficiently distinct from test classes, making few-shot classification more difficult.
CIFAR-FS is a derivative of the original CIFAR-100 dataset by randomly splitting 100 classes into 64, 16, and 20 classes for training, validation, and testing, respectively (Bertinetto et al., 2019).
FC100 is another derivative of CIFAR-100 with minimized overlapped information between train classes and test classes by grouping the 100 classes into 20 superclasses (Oreshkin et al., 2018). They are further split into 60 training classes (12 superclasses), 20 validation classes (4 superclasses), and 20 test classes (4 superclasses).
miniImageNet to CUB is a cross-domain few-shot classification task, where the models are trained on miniImageNet and tested on CUB (Welinder et al., 2010). Cross-domain few-shot classification is more challenging due to the big domain gap between the two datasets, so we can better evaluate the generalization capability of different algorithms. We follow the experiment setup in Yue et al. (2020) and use WRN-28-10 as the backbone.
The backbone model is trained on all training classes using C-class cross-entropy loss by the SGD optimizer (momentum of 0.9 and weight decay of 1e-4) with a mini-batch size of 64. The learning rate is initialized as 0.05 and is decayed by 0.1 after 60, 80, and 90 epochs (100 epochs in total). After the SGD training converges, we run 100 epochs of SWA with a learning rate of 0.02. Note that MFRL is not sensitive to training epochs and learning rates in SWA (see Appendix A.4). The training images are augmented with random crop, random horizontal flip, and color jitter.
During testing, we conduct 5 independent runs of 600 randomly sampled few-shot classification tasks from the test classes and calculate the average accuracy. Each task contains 5 classes, 1 × 5 or 5 × 5 support samples, and 75 query samples. A logistic regression model is learned using only the support samples, and the classification accuracy is evaluated on the query samples.
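This evaluation protocol can be summarized in code: sample a 5-way task from precomputed meta-test features, fit the logistic regression on the support set, and score the query set. The following is a sketch; features_by_class is a hypothetical dict mapping a class id to an array of $h(\mathbf{x})$ vectors for that class.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sample_task(features_by_class, rng, n_way=5, n_shot=5, n_query=15):
    """Draw one n_way-way, n_shot-shot task from precomputed test features."""
    classes = rng.choice(list(features_by_class), size=n_way, replace=False)
    xs, ys, xq, yq = [], [], [], []
    for label, c in enumerate(classes):
        idx = rng.choice(len(features_by_class[c]), size=n_shot + n_query,
                         replace=False)
        feats = features_by_class[c][idx]
        xs.append(feats[:n_shot]); ys += [label] * n_shot
        xq.append(feats[n_shot:]); yq += [label] * n_query
    return np.vstack(xs), np.array(ys), np.vstack(xq), np.array(yq)

def mean_accuracy(features_by_class, n_tasks=600, seed=0):
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_tasks):
        xs, ys, xq, yq = sample_task(features_by_class, rng)
        clf = LogisticRegression(max_iter=1000).fit(xs, ys)
        accs.append((clf.predict(xq) == yq).mean())
    return float(np.mean(accs))
```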
A.3 ADDITIONAL RESULTS ON FEW-SHOT REGRESSION AND CLASSIFICATION
The additional results on few-shot regression using different activation functions are reported in Table 6. MFRL achieves high accuracy with different activation functions.
The few-shot classification results using a 4-layer convolutional neural network (or similar architectures) are reported in Table 7 and 8. Similar to the results using ResNet-12 and WRN-28-10, the proposed method outperforms a wide range of meta-learning approaches. Our method is only
second to few-shot embedding adaptation with transformer (FEAT) (Ye et al., 2020) on the miniImageNet dataset. Recently, meta-learned attention modules have been built on top of convolutional neural networks to obtain improved few-shot classification accuracy. Direct comparison to those methods with attention modules (Ye et al., 2020; Fei et al., 2021; Zhang et al., 2021) may not be fair because recent studies show that the transformer itself can achieve better results than convolutional neural networks in image classification (Dosovitskiy et al., 2021). It is difficult to determine whether the performance improvement is due to the meta-learning algorithm or the attention modules. To make a fair comparison, we add convolutional block attention modules (Woo et al., 2018) on top of ResNet-12 features (before global average pooling). As shown in Fig. 9, MFRL with attention modules achieves results comparable to MELR and IEPT.
The uncertainty calibration results of MFRL with the temperature scaling factor are presented in Fig. 5. The prediction confidence aligns well with the prediction accuracy. It demonstrates that MFRL with the temperature scaling factor results in well calibrated models.
A.4 SENSITIVITY OF MFRL
The performance of MFRL is not sensitive to learning rates in SWA. As shown in Fig. 6, the representation learned by SWA generalizes better than the one from standard SGD, as long as the learning rate in SWA is in a reasonable range. In addition, the prediction accuracy on meta-test tasks keeps stable even after running SWA for many epochs on the training data. Therefore, MFRL is not sensitive to training epochs. This desirable property makes the proposed method easy to use when solving few-shot learning problems in practice.
A.5 COMPARISON WITH EXPONENTIAL MOVING AVERAGING
Exponential moving averaging (EMA) decays the importance of model weights from early training epochs exponentially: $\theta_{avg} \leftarrow a\,\theta_{avg} + (1 - a)\,\theta_{new}$. We try EMA with different values of $a$. In Table 10, EMA improves the performance when $a$ is within a reasonable range. Note that EMA introduces one extra hyperparameter, the forgetting factor, which makes EMA less desirable in practice.
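For completeness, the EMA update written over plain arrays is a one-liner; unlike SWA's uniform tail average, the forgetting factor $a$ must be tuned. A minimal sketch:

```python
import numpy as np

def ema_update(avg_params, new_params, a=0.999):
    """theta_avg <- a * theta_avg + (1 - a) * theta_new, applied parameter-wise."""
    return [a * p_avg + (1.0 - a) * p_new
            for p_avg, p_new in zip(avg_params, new_params)]

theta_avg = [np.zeros(3)]
theta_avg = ema_update(theta_avg, [np.ones(3)], a=0.9)
print(theta_avg)  # [array([0.1, 0.1, 0.1])]
```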
A.6 HIERARCHICAL BAYESIAN LINEAR CLASSIFICATION MODEL
Similar to the hierarchical Bayesian linear regression model, the prior distribution over $\mathbf{w}$ is $p(\mathbf{w} \mid \lambda) = \prod_{i=0}^{p} \mathcal{N}(w_i \mid 0, \lambda)$, where $\lambda$ is the precision of the Gaussian prior. The hyperprior on $\lambda$ is defined as $p(\lambda) = \mathrm{Gamma}(\lambda \mid a, b)$. The posterior over all latent variables given the data is

$$p(\mathbf{w}, \lambda \mid \mathbf{X}, \mathbf{y}) = \frac{p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}, \lambda)\, p(\mathbf{w} \mid \lambda)\, p(\lambda)}{p(\mathbf{y} \mid \mathbf{X})} \qquad (7)$$
MCMC sampling (Hoffman & Gelman, 2014) is used to avoid potential deterioration in predictive performance due to approximate inference. A flat, non-informative hyperprior ($a = b = 10^{-6}$) is used because no prior knowledge is available. In Table 11, the hierarchical Bayesian linear classification model achieves slightly worse performance than the logistic regression model. Moreover, the classification model is not well calibrated, as shown in Fig. 7.
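For concreteness, here is a NumPy sketch of the unnormalized log-joint that a sampler such as NUTS would target; we simplify to binary classification (K = 2), sample λ on the log scale, and the function name is ours rather than the paper's.

```python
import numpy as np

def log_joint(w, log_lam, X, y, a=1e-6, b=1e-6):
    """log p(y | X, w) + log p(w | lambda) + log p(lambda), up to a constant."""
    lam = np.exp(log_lam)
    logits = X @ w
    # Bernoulli log-likelihood in a numerically stable form.
    loglik = np.sum(y * logits - np.logaddexp(0.0, logits))
    # Gaussian prior with precision lambda on every weight.
    logprior_w = 0.5 * len(w) * np.log(lam) - 0.5 * lam * np.sum(w ** 2)
    # Gamma(a, b) hyperprior, expressed in log(lambda) (Jacobian included).
    logprior_lam = a * log_lam - b * lam
    return loglik + logprior_w + logprior_lam
```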
After fine-tuning $a$ and $b$ on the meta-validation data, it is possible to obtain better-calibrated classification models on test tasks. However, the classification accuracy is still slightly worse than logistic regression after hyperparameter tuning. Our observations align with a recent study showing that Bayesian classification models cannot match their non-Bayesian counterparts without tempering the posterior (Wenzel et al., 2020). We do not further experiment with tempered posteriors in the hierarchical Bayesian linear classification model because they introduce an extra temperature hyperparameter that requires tuning. The original purpose of introducing the hierarchical Bayesian model was to obtain an accurate and well-calibrated classification model without hyperparameter tuning. Consequently, the hierarchical Bayesian model is not used in few-shot classification, because hierarchical Bayesian linear classification models cannot achieve high accuracy and good uncertainty calibration from a non-informative hyperprior. If hyperparameter tuning is inevitable, it is much easier to tune a logistic regression model with a temperature scaling factor than a hierarchical Bayesian model. Furthermore, the computational cost of learning a hierarchical Bayesian linear classification model via MCMC sampling is much larger than that of learning a logistic regression model. | 1. What are the concerns regarding the opening remarks of the paper, particularly on the statement about episodic meta-learning and its limitations?
2. How does the proposed method address the shortcomings of episodic meta-learning, such as slow convergence, overfitting, and complexity?
3. What is the validity of the statement regarding transfer learning and learned representation, and how does it relate to recent studies in few-shot learning?
4. What is the novelty of the proposed method, especially in terms of fine-tuning probabilistic linear models in the meta-test phase without any learning mechanisms related to meta-training or representation learning phase?
5. Are there any typos or errors in the main text that need to be addressed?
6. How can the methodology part be rewritten to make it clearer and more concise, especially regarding the usage of SWA as a regularizer and the adaptation of temperature scaling for meta-test phase?
7. Why are the few-shot classification results not fair, and what can be done to improve the comparison with recent algorithms like MELR? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a transfer learning method for few-shot regression/classification. In representation learning, it adopts stochastic weight averaging (SWA) as a regularizer for learning more generalizable features. In the fine-tuning phase (adaptation to a new task in the meta-test phase), it treats regression and classification differently: for regression, a hierarchical Bayesian model is used, while for classification, a conventional method is merged with temperature scaling. This work achieves comparable results to SOTA; however, there are major concerns that need to be addressed.
Review
There are several concerns that need to be addressed, and I believe addressing them can improve the quality of this paper.
There are some typos in the main text, like Prot Net in 5.5.
I have some major concerns about the opening remarks of this paper listed below:
In the introduction, the authors have mentioned that : “Despite the success of episodic meta-learning in few-shot learning tasks, they are slow to converge, prone to over-fitting, and tricky to implement (Antoniou et al., 2018 [1]).”
I think this statement is not generally true, as these are the shortcomings of the MAML algorithm listed in [1], and most of them are addressed in that paper as MAML++. Also, as an example, recent metric-based methods (ProtoNet, RelationNet, FT for cross-domain FSL [2]) do not suffer from these issues that much, at least compared to transfer learning methods like Chen et al. [3] and Dhillon et al. [4].
For example, ProtoNet has relatively fast convergence, and increasing the number of ways in meta-training has been shown to resolve overfitting. There are also several recent extensions and regularizers for few-shot learning that address these problems well.
In the later part of the introduction, you also mention that your algorithm tries to address these shortcomings: "In this paper, we propose a new method that does not rely on meta-training to overcome the limitations of commonly used episodic meta-learning approaches in few-shot learning." However, there is no evaluation of faster convergence, overfitting prevention, or simpler implementation of your algorithm compared to recent episodic ones.
I also doubt the validity of this statement in the introduction: “recent studies on transfer learning (Chen et al., 2019; Tian et al., 2020) cast doubt on whether it is the episodic meta-learning algorithm or the learned representation that is responsible for fast adaption to new tasks”.
Specifically, I am not so sure about differentiating episodic meta-learning from learning better features. As an example, Goldblum et al. [5] show that meta-learned features generalize better than those of conventionally trained networks with exactly the same structure. So, using meta-learning algorithms leads to better representation learning, and these two are not independent concepts.
What do you exactly mean by this?
“Our method is the first one to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase, without any learning mechanisms related to the meta-training or representation learning phase.”
This mechanism is the well-known solution in transfer learning methods, first popularized by [3] for few-shot learning. Am I missing something here?
Considering all these problems, I think the authors need to rewrite the introduction part and make it more reliable, considering the recent studies in few-shot learning. There are also some concerns for the Methodology part:
Based on my understanding, your representation learning part is similar to [3], and the only difference is the usage of SWA as post-processing of the network parameters. Although it is mentioned that the purpose is to find a low-rank approximation, it is not clear how you finally deploy it in your algorithm. The algorithm is unclear and seems to be lost in the explanation of various concepts!
Are you using a simple averaging (mentioned in equation (1)) as a regularizer and just describing that this implicitly steers the model towards low-rank representation?
For the fine-tuning in few-shot classification, it is not clear to me how the logistic regression model fits within your framework.
I think adapting temperature scaling for the meta-test phase is the only contribution to fine-tuning in few-shot classification. Is this novel enough in few-shot learning to be considered a new algorithm?
Overall, I found the methodology very hard to follow, as in some cases the main flow of the algorithm is lost due to over-explanation of other concepts.
Regarding experiments:
The few-shot regression results are interesting, but the few-shot classification comparison is not fair, because some recent algorithms like MELR [6] are not included.
References:
[1] How to train your MAML, ICLR 2018.
[2] Cross-Domain Few-shot Classification via Learned Features-wise Transformation, ICLR 2020.
[3] A closer look at few-shot classification, ICLR 2019.
[4] A Baseline for Few-Shot Image Classification, ICLR 2020.
[5] Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks, ICML 2020.
[6] MELR: Meta-Learning via Modeling Episode-Level Relationships for Few-shot Learning, ICLR 2021. |
ICLR | Title
Meta-free few-shot learning via representation learning with weight averaging
Abstract
Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms. Transfer learning approaches are a natural alternative, but they are restricted to few-shot classification. Moreover, little attention has been paid to the development of probabilistic models with well-calibrated uncertainty from few-shot samples, except for some Bayesian episodic learning algorithms. To tackle the aforementioned issues, we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification. The resulting method does not require episodic meta-learning and is called meta-free representation learning (MFRL). MFRL first finds low-rank representation generalizing well on meta-test tasks. Given the learned representation, probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty. The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty. In addition, weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms across a wide range of learning paradigms and model architectures.
1 INTRODUCTION
Currently, the vast majority of few-shot learning methods fall within the general paradigm of meta-learning (a.k.a. learning to learn) (Bengio et al., 1991; Schmidhuber, 1987; Thrun & Pratt, 1998), which learns multiple tasks in an episodic manner to distill transferrable knowledge (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017). Although many episodic meta-learning methods report state-of-the-art (SOTA) performance, recent studies show that simple transfer learning methods with fixed embeddings (Chen et al., 2019; Tian et al., 2020) can achieve similar or better performance in few-shot learning. It has been found that the effectiveness of optimization-based meta-learning algorithms is due to reusing high-quality representation, instead of rapid learning of task-specific representation (Raghu et al., 2020). The quality of the representation is not quantitatively defined, except for some empirical case studies (Goldblum et al., 2020). Recent machine learning theories (Saunshi et al., 2021) indicate that low-rank representation leads to better sample efficiency in learning a new task. However, those theoretical studies are within the paradigm of meta-learning and do not reveal how to obtain low-rank representation for few-shot learning outside the realm of meta-learning. This motivates us to investigate ways to improve the representation for adapting to new few-shot tasks in a meta-free manner, taking advantage of the simplicity and robustness of transfer learning.
In parallel, existing transfer learning methods also have limitations. In particular, they may not find representation that generalizes well to unseen few-shot tasks (Chen et al., 2019; Dhillon et al., 2020), compared with state-of-the-art meta-learning methods (Ye et al., 2020; Zhang et al., 2020). Although some transfer learning methods utilize knowledge distillation and self-supervised training to achieve strong performance in few-shot classification, they are restricted to few-shot classification problems (Mangla et al., 2020; Tian et al., 2020). To the best of our knowledge, no transfer learning method has been developed to achieve performance similar to meta-learning in few-shot regression. As such, it is desirable to have a transfer learning method that finds high-quality representation generalizing well to unseen classification and regression problems.
The last limitation of existing transfer learning methods in few-shot learning is the lack of uncertainty calibration. Uncertainty quantification is concerned with quantifying how likely certain outcomes are. Despite a plethora of few-shot learning methods (in fact, machine learning methods in general) that improve point estimation accuracy, few methods have been developed to obtain probabilistic models with improved uncertainty calibration by integrating Bayesian learning into episodic meta-training (Grant et al., 2018; Finn et al., 2018; Yoon et al., 2018; Snell & Zemel, 2021). Few-shot learning models can be used in risk-averse applications such as medical diagnosis (Prabhu et al., 2019), where the decision is based not only on the point estimate but also on the probabilities associated with the prediction. The risk of making wrong decisions is significant when using uncalibrated models (Begoli et al., 2019). Thus, the development of proper fine-tuning steps to achieve well-calibrated models is key to practical applications of transfer learning in few-shot learning.
In this paper, we develop a simple transfer learning method as our own baseline to allow easy regularization towards more generalizable representation and calibration of prediction uncertainty. The regularization in the proposed transfer learning method works for regression and classification problems so that we can handle both problems within a common architecture. The calibration procedure is easily integrated into the developed transfer learning method to obtain few-shot learning models with good uncertainty quantification. Therefore, the resulting method, called Meta-Free Representation Learning (MFRL), overcomes the aforementioned limitations in existing transfer learning methods for few-shot learning. Our empirical studies demonstrate that the relatively overlooked transfer learning method can achieve high accuracy and well-calibrated uncertainty in few-shot learning when it is combined with the proper regularization and calibration. Those two tools are also portable to meta-learning methods to improve accuracy and calibration, but the improvement is less significant compared with that of transfer learning.
We use stochastic weight averaging (SWA) (Izmailov et al., 2018), which is agnostic to the type of loss function, as implicit regularization to improve the generalization capability of the representation. We also show that the effectiveness of SWA is due to its bias towards low-rank representation. To address the issue of uncertainty quantification, we fine-tune appropriate linear layers during the meta-test phase to obtain models with well-calibrated uncertainty. In MFRL, hierarchical Bayesian linear models are used to properly capture the uncertainty from very limited training samples in few-shot regression, whereas the softmax output is scaled with a temperature parameter to make the few-shot classification model well-calibrated. Our method is the first one to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase, without any learning mechanisms related to the meta-training or representation learning phase.
Our contributions in this work are summarized as follows:
• We propose a transfer learning method that can handle both few-shot regression and classification problems with performance exceeding SOTA.
• For the first time, we empirically find the implicit regularization of SWA towards low-rank representation, which is a useful property in transferring to few-shot tasks.
• The proposed method results in well-calibrated uncertainty in few-shot learning models while preserving SOTA accuracy.
• The implicit regularization of SWA and temperature scaling factor can be applied to existing meta-learning methods to improve their accuracy and reliability in few-shot learning.
2 RELATED WORK
Episodic meta-learning approaches can be categorized into metric-based and optimization-based methods. Metric-based methods project input data to feature vectors through nonlinear embeddings and compare their similarity to make the prediction. Examples of similarity metrics include the weighted L1 metric (Koch et al., 2015), cosine similarity (Qi et al., 2018; Vinyals et al., 2016), and Euclidean distance to class-mean representation (Snell et al., 2017). Instead of relying on predefined metrics, learnable similarity metrics are introduced to improve the few-shot classification performance (Oreshkin et al., 2018; Sung et al., 2018). Recent metric-based approaches focus on developing task-adaptive embeddings to improve few-shot classification accuracy. Those task-adaptive embeddings include attention mechanisms for feature transformation (Fei et al., 2021; Gidaris & Komodakis, 2018; Ye et al., 2020; Zhang et al., 2021), graph neural networks (Garcia & Estrach,
2018), implicit class representation (Ravichandran et al., 2019), and task-dependent conditioning (Oreshkin et al., 2018; Yoon et al., 2020; 2019). Although metric-based approaches achieve strong performance in few-shot classification, they cannot be directly applied to regression problems.
Optimization-based meta-learning approaches try to find transferrable knowledge and adapt to new tasks quickly. An elegant and powerful meta-learning approach, termed model-agnostic metalearning (MAML), solves a bi-level optimization problem to find good initialization of model parameters (Finn et al., 2017). However, MAML has a variety of issues, such as sensitivity to neural network architectures, instability during training, arduous hyperparameter tuning, and high computational cost. On this basis, some follow-up methods have been developed to simplify, stabilize and improve the training process of MAML (Antoniou et al., 2018; Flennerhag et al., 2020; Lee & Choi, 2018; Nichol et al., 2018; Park & Oliva, 2019). In practice, it is very challenging to learn high-dimensional model parameters in a low-data regime. Latent embedding optimization (LEO) attempts to learn low-dimensional representation to generate high-dimensional model parameters (Rusu et al., 2019). Meanwhile, R2-D2 (Bertinetto et al., 2019) and MetaOptNet (Lee et al., 2019) reduce the dimensionality of trainable model parameters by freezing feature extraction layers during inner loop optimization. Note that the proposed method is fundamentally different from R2-D2 and MetaOptNet because our method requires neither episodic meta-learning nor bi-level optimization.
Transfer learning approaches first learn a feature extractor on all training data through standard supervised learning and then fine-tune a linear predictor on top of the learned feature extractor in a new task (Chen et al., 2019). However, vanilla transfer learning methods for few-shot learning do not take extra steps to make the learned representation generalize well to unseen meta-test tasks. Some approaches in this paradigm have been developed to improve the quality of the representation and boost the accuracy of few-shot classification, including cooperative ensembles (Dvornik et al., 2019), knowledge distillation (Tian et al., 2020), and auxiliary self-supervised learning (Mangla et al., 2020). Nevertheless, those transfer learning methods are restricted to few-shot classification. MFRL aims to find representation that generalizes well from the perspective of low-rank representation learning, which is supported by recent theoretical studies (Saunshi et al., 2021). Furthermore, MFRL is the first transfer learning method that can handle both few-shot regression and classification problems and make predictions with well-calibrated uncertainty.
3 BACKGROUND
3.1 EPISODIC META-LEARNING
In episodic meta-learning, the meta-training data contains $T$ episodes or tasks, where the $\tau$-th episode consists of data $D_\tau = \{(\mathbf{x}_{\tau,j}, \mathbf{y}_{\tau,j})\}_{j=1}^{N_\tau}$ with $N_\tau$ samples. Tasks and episodes are used interchangeably in the rest of the paper. Episodic meta-learning algorithms aim to find common model parameters $\theta$ which can be quickly adapted to task-specific parameters $\phi_\tau$ ($\tau = 1, \dots, T$). For example, MAML-type algorithms assume $\phi_\tau$ is one or a few gradient steps away from $\theta$ (Finn et al., 2017; 2018; Grant et al., 2018; Yoon et al., 2018), while other meta-learning approaches assume that $\phi_\tau$ and $\theta$ share the parameters in the feature extractor and only differ in the top layer (Bertinetto et al., 2019; Lee et al., 2019; Snell et al., 2017).
3.2 STOCHASTIC WEIGHT AVERAGING
The idea of stochastic weight averaging (SWA) along the trajectory of SGD goes back to Polyak–Ruppert averaging (Polyak & Juditsky, 1992). Theoretically, weight averaging results in faster convergence for linear models in supervised learning and reinforcement learning (Bach & Moulines, 2013; Lakshminarayanan & Szepesvari, 2018). In deep learning, we are more interested in tail stochastic weight averaging (Jain et al., 2018), which averages the weights after $T$ training epochs. The averaged model parameters $\theta_{\text{SWA}}$ can be computed by running $s$ additional training epochs using SGD:
$$\theta_{\text{SWA}} = \frac{1}{s} \sum_{i=T+1}^{T+s} \theta_i, \qquad (1)$$
where $\theta_i$ denotes the model parameters at the end of the $i$-th epoch. SWA has been applied to supervised learning of deep neural networks to achieve higher test accuracy (Izmailov et al., 2018).
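A minimal sketch of Eq. 1, assuming the epoch-end snapshots $\theta_{T+1}, \dots, \theta_{T+s}$ have been collected as PyTorch state dicts during the additional SWA epochs:

```python
import torch

def tail_average(snapshots):
    """Equal-weight average of s epoch-end snapshots (Eq. 1); non-floating
    buffers (e.g. BatchNorm step counters) are taken from the last snapshot."""
    avg = {k: v.clone() for k, v in snapshots[-1].items()}
    for k, v in avg.items():
        if v.is_floating_point():
            avg[k] = torch.stack([sd[k] for sd in snapshots]).mean(dim=0)
    return avg
```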
4 METHODOLOGY
The proposed method is a two-step learning algorithm: meta-free representation learning followed by fine-tuning. We employ SWA to make the learned representation low-rank and better generalize to meta-test data. Given a meta-test task, a new top layer is fine-tuned with few-shot samples to obtain probabilistic models with well-calibrated uncertainty. Note that MFRL can be used for both regression and classification depending on the loss function. The pseudocode of MFRL is presented in Appendix A.1.
4.1 REPRESENTATION LEARNING
Common representation can be learned by maximizing the likelihood of all training data with respect to $\theta$, rather than by following episodic meta-learning. To do so, we group the data $D_\tau = \{(\mathbf{x}_{\tau,j}, \mathbf{y}_{\tau,j})\}_{j=1}^{N_\tau}$ from all meta-training tasks into a single dataset $D_{\text{tr}}$. Given the aggregated training data $D_{\text{tr}} = \{\mathbf{X}, \mathbf{Y}\}$, representation can be learned by maximizing the likelihood $p(D_{\text{tr}} \mid \theta)$ with respect to $\theta$. Let $\theta = [\theta_f, \mathbf{W}]$, where $\theta_f$ represents the parameters in the feature extractor and $\mathbf{W}$ denotes the parameters in the top linear layer. The feature extractor $h(\mathbf{x}) \in \mathbb{R}^p$ is a neural network parameterized by $\theta_f$ and outputs a feature vector of dimension $p$. The specific form of the loss function depends on whether the task is regression or classification and is given as follows:
$$L_{\text{RP}}(\theta) = -\log p(D_{\text{tr}} \mid \theta) = \begin{cases} L_{\text{MSE}}(\theta), & \text{regression} \\ L_{\text{CE}}(\theta), & \text{classification} \end{cases}$$

where

$$L_{\text{MSE}}(\theta) = \frac{1}{2N'} \sum_{\tau=1}^{T} \sum_{j=1}^{N_\tau} \left( y_{\tau,j} - \mathbf{w}_\tau^\top h(\mathbf{x}_{\tau,j}) \right)^2, \qquad (2)$$

$$L_{\text{CE}}(\theta) = -\sum_{j=1}^{N'} \sum_{c=1}^{C} y_{j,c} \log \frac{\exp(\mathbf{w}_c^\top h(\mathbf{x}_j))}{\sum_{c'=1}^{C} \exp(\mathbf{w}_{c'}^\top h(\mathbf{x}_j))}. \qquad (3)$$
For regression problems, the model learns $T$ regression tasks ($\mathbf{W} = [\mathbf{w}_1, \dots, \mathbf{w}_T] \in \mathbb{R}^{(p+1) \times T}$) simultaneously using the loss function $L_{\text{MSE}}$ given in Eq. 2, whereas the model learns a $C$-class classification model¹ ($\mathbf{W} = [\mathbf{w}_1, \dots, \mathbf{w}_C] \in \mathbb{R}^{(p+1) \times C}$) for classification problems using the loss function $L_{\text{CE}}$ in Eq. 3. The loss function - either Eq. 2 or 3 - can be minimized through standard stochastic gradient descent, where $N' = \sum_{\tau=1}^{T} N_\tau$ is the total number of training samples.
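A sketch of these two losses on a mini-batch drawn from the merged dataset; `backbone` and `W` are placeholders, the bias is assumed to be absorbed into the features, and the MSE variant matches Eq. 2 up to the batch-mean scaling.

```python
import torch
import torch.nn.functional as F

def representation_loss(backbone, W, x, y, task="classification"):
    feats = backbone(x)                            # (batch, p) features
    logits = feats @ W                             # (batch, T) or (batch, C) heads
    if task == "regression":
        # y = (task indices as a long tensor, targets): only each sample's
        # own head w_tau receives a gradient.
        task_idx, targets = y
        preds = logits.gather(1, task_idx.view(-1, 1)).squeeze(1)
        return 0.5 * F.mse_loss(preds, targets)    # Eq. 2 (batch mean)
    return F.cross_entropy(logits, y)              # Eq. 3 (C-class CE)
```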
Post-processing via SWA. Minimizing the loss functions in Eq. 2 and 3 by SGD may not necessarily result in representation that generalizes well to few-shot learning tasks in the meta-test set. The last hidden layer of a modern deep neural network is high-dimensional and may contain spurious features that over-fit the meta-training data. Recent meta-learning theories indicate that better sample complexity in learning a new task can be achieved via low-rank representation, whose singular values decay faster (Saunshi et al., 2021). We aim to find low-rank representation $\Phi = h(\mathbf{X})$ without episodic meta-learning, which is equivalent to finding the conjugate kernel $K_C = \Phi\Phi^\top$ with fast-decaying eigenvalues. To link the representation with the parameter space, we can linearize the neural network by a first-order Taylor expansion at $\theta_T$ and get the finite-width neural tangent kernel (NTK) $K_{\text{NTK}}(\mathbf{X}, \mathbf{X}) = J(\mathbf{X})J(\mathbf{X})^\top$, where $J(\mathbf{X}) = \nabla_\theta f_\theta(\mathbf{X}) \in \mathbb{R}^{N' \times |\theta|}$ is the Jacobian matrix, and $K_{\text{NTK}}$ is a composite kernel containing $K_C$ (Fan & Wang, 2020). The distributions of eigenvalues for $K_{\text{NTK}}$ and $K_C$ are empirically similar, so analyzing $K_{\text{NTK}}$ could shed light on the properties of $K_C$. In parallel, $K_{\text{NTK}}$ shares the same eigenvalues as the Gauss-Newton matrix $G = \frac{1}{N'} J(\mathbf{X})^\top J(\mathbf{X})$. For linearized networks with squared loss, the Gauss-Newton matrix $G$ well approximates the Hessian matrix $H$ when $\mathbf{y}$ is well described by $f_\theta(\mathbf{x})$ (Martens, 2020). This is the case when SGD converges to $\theta_T$ within a local minimum basin. A Hessian matrix with many small eigenvalues corresponds to a flat minimum, where the loss function is less sensitive to perturbations of the model parameters (Keskar et al., 2017). It is known that averaging the weights after SGD convergence in a local minimum basin pushes $\theta_T$ towards the flat side of the loss valley (He et al., 2019). As a result, SWA could result in a faster decay of eigenvalues in the kernel matrix, and thus low-rank representation. Our conjecture about SWA as implicit regularization towards low-rank representation is empirically verified in Section 5.

¹ $C$ is the total number of classes in $D_{\text{tr}}$. Learning a $C$-class classification model solves all possible tasks in the meta-training dataset because each task $D_\tau$ only contains a subset of the $C$ classes.
4.2 FINE-TUNING
After representation learning is complete, W is discarded and θf is frozen in a new few-shot task. Given the learned representation, we train a new probabilistic top layer in a meta-test task using fewshot samples. The new top layer will be configured differently depending on whether the few-shot task is a regression or a classification problem.
In a few-shot regression task, we learn a new linear regression model $y = \mathbf{w}^\top h(\mathbf{x}) + \epsilon$ on a fixed feature extractor $h(\mathbf{x}) \in \mathbb{R}^p$ with few-shot training data $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{w}$ denotes the model parameters and $\epsilon$ is Gaussian noise with zero mean and variance $\sigma^2$. To avoid interpolation on few-shot training data ($n \ll p$), a Gaussian prior $p(\mathbf{w} \mid \lambda) = \prod_{i=0}^{p} \mathcal{N}(w_i \mid 0, \lambda^{-1})$ is placed over $\mathbf{w}$, where $\lambda$ is the precision of the Gaussian prior. However, it is difficult to obtain an appropriate value for $\lambda$ in a few-shot regression task because no validation data is available in $D$. Hierarchical Bayesian linear models can be used to obtain optimal regularization strength and grounded uncertainty estimation using few-shot training data only. To complete the specification of the hierarchical Bayesian model, the hyperpriors on $\lambda$ and $\sigma^2$ are defined as $p(\lambda) = \text{Gamma}(\lambda \mid a, b)$ and $p(\sigma^{-2}) = \text{Gamma}(\sigma^{-2} \mid c, d)$, respectively. The hyperpriors become very flat and non-informative when $a$, $b$, $c$, and $d$ are set to very small values. The posterior over all latent variables given the data is $p(\mathbf{w}, \lambda, \sigma^2 \mid \mathbf{X}, \mathbf{y})$, where $\mathbf{X} = \{\mathbf{x}_i\}_{i=1}^{n}$ and $\mathbf{y} = \{y_i\}_{i=1}^{n}$. However, this posterior distribution is intractable. The iterative optimization-based approximate inference of Tipping (2001) is chosen because it is highly efficient: the point estimates of $\lambda$ and $\sigma^2$ are obtained by maximizing the marginal likelihood $p(\mathbf{y} \mid \mathbf{X}, \lambda, \sigma^2)$, the posterior over the model parameters $p(\mathbf{w} \mid \mathbf{X}, \mathbf{y}, \lambda, \sigma^2)$ is computed using the estimated $\lambda$ and $\sigma^2$, and the two steps are repeated alternately until convergence.
The predictive distribution for a new sample $\mathbf{x}_*$ is

$$p(y_* \mid \mathbf{x}_*, \mathbf{X}, \mathbf{y}, \lambda, \sigma^2) = \int p(y_* \mid \mathbf{x}_*, \mathbf{w}, \sigma^2)\, p(\mathbf{w} \mid \mathbf{X}, \mathbf{y}, \lambda, \sigma^2)\, d\mathbf{w}, \qquad (4)$$

which can be computed analytically because both distributions on the right-hand side of Eq. 4 are Gaussian. Consequently, the hierarchical Bayesian linear model avoids over-fitting on few-shot training data and quantifies predictive uncertainty.
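The alternating updates can be sketched with the standard evidence-approximation formulas; the helper names are ours, and `Phi` denotes the n × p matrix of frozen support-set features (a constant column can be appended for the bias).

```python
import numpy as np

def fit_bayesian_linear(Phi, y, n_iter=100, tol=1e-6):
    """Alternate the posterior over w with point estimates of (lambda, beta = 1/sigma^2)."""
    n, p = Phi.shape
    lam, beta = 1.0, 1.0
    for _ in range(n_iter):
        S = np.linalg.inv(lam * np.eye(p) + beta * Phi.T @ Phi)   # posterior covariance
        m = beta * S @ Phi.T @ y                                  # posterior mean
        gamma = p - lam * np.trace(S)          # effective number of parameters
        lam_new = gamma / (m @ m)
        beta_new = (n - gamma) / np.sum((y - Phi @ m) ** 2)
        converged = abs(lam_new - lam) < tol and abs(beta_new - beta) < tol
        lam, beta = lam_new, beta_new
        if converged:
            break
    return m, S, lam, beta

def predict(phi_star, m, S, beta):
    """Predictive mean and variance for a new feature vector (Eq. 4)."""
    return phi_star @ m, 1.0 / beta + phi_star @ S @ phi_star
```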
In a few-shot classification task, a new logistic regression model is learned with the post-processed representation. A typical $K$-way $n$-shot classification task $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{nK}$ consists of $K$ classes (different from the meta-training classes) and $n$ training samples per class. Minimizing an unregularized cross-entropy loss results in a significantly over-confident classification model because the norm of the logistic regression parameters $\mathbf{W} \in \mathbb{R}^{(p+1) \times K}$ becomes very large when the few-shot training samples can be perfectly separated in the setting $nK \ll p$. A weighted L2 regularization term is added to the cross-entropy loss to mitigate the issue:

$$L(\mathbf{W}) = -\sum_{i=1}^{nK} \sum_{c=1}^{K} y_{i,c} \log \frac{\exp(\mathbf{w}_c^\top h(\mathbf{x}_i))}{\sum_{c'=1}^{K} \exp(\mathbf{w}_{c'}^\top h(\mathbf{x}_i))} + \lambda \sum_{c=1}^{K} \mathbf{w}_c^\top \mathbf{w}_c, \qquad (5)$$
where $\lambda$ is the regularization coefficient, which affects both the prediction accuracy and the uncertainty. It is difficult to select an appropriate value of $\lambda$ in each meta-test task due to the lack of validation data in $D$. We instead treat $\lambda$ as a global hyperparameter whose value is determined by the accuracy on meta-validation data. Note that a $\lambda$ selected for high validation accuracy does not necessarily lead to well-calibrated classification models. As such, we introduce the temperature scaling factor (Guo et al., 2017) as another global hyperparameter to
scale the softmax output. Given a test sample $\mathbf{x}_*$, the predicted probability for class $c$ becomes

$$p_c = \frac{\exp(\mathbf{w}_c^\top h(\mathbf{x}_*)/T)}{\sum_{c'=1}^{K} \exp(\mathbf{w}_{c'}^\top h(\mathbf{x}_*)/T)}, \qquad (6)$$
where $T$ is the temperature scaling factor. In practice, we select the L2 regularization coefficient $\lambda$ and the temperature scaling factor $T$ as follows. First, we set $T = 1$ and do a grid search on the meta-validation data to find the $\lambda$ yielding the highest meta-validation accuracy. However, fine-tuning $\lambda$ alone does not ensure good calibration; it is the temperature scaling factor that ensures good uncertainty calibration. Similarly, we do a grid search for $T$ on the meta-validation set and choose the temperature scaling factor yielding the lowest expected calibration error (Guo et al., 2017). Note that different values of $T$ do not affect the classification accuracy because temperature scaling is accuracy-preserving.
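A sketch of this fine-tuning step; here sklearn's inverse regularization strength is assumed to map to Eq. 5 roughly as C = 1/(2λ), and T rescales the logits before the softmax as in Eq. 6.

```python
import numpy as np
from scipy.special import softmax
from sklearn.linear_model import LogisticRegression

def finetune_head(X_support, y_support, X_query, lam=1.0, T=1.0):
    """Fit the L2-regularized head on support features, return scaled probabilities."""
    clf = LogisticRegression(C=1.0 / (2.0 * lam), max_iter=1000)
    clf.fit(X_support, y_support)
    logits = clf.decision_function(X_query)
    return softmax(logits / T, axis=1)   # T changes confidence, never the argmax
```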
5 EXPERIMENTS
We follow the standard setup in few-shot learning literature. The model is trained on a meta-training dataset and hyper-parameters are selected based on the performance on a meta-validation dataset. The final performance of the model is evaluated on a meta-test dataset. The proposed method is applied to few-shot regression and classification problems and compared against a wide range of alternative methods.
5.1 FEW-SHOT REGRESSION RESULTS
The sine waves (Finn et al., 2017) and head pose estimation (Patacchiola et al., 2020) datasets are used to evaluate the performance of MFRL in few-shot regression. We use the same backbones as in the literature (Patacchiola et al., 2020) to make fair comparisons. Details of the few-shot regression experiments can be found in Appendix A.2.
The results for few-shot regression are summarized in Table 1. In the sine wave few-shot regression, MFRL outperforms all meta-learning methods, demonstrating that high-quality representation can be learned in supervised learning, without episodic meta-learning. Although DKT with a spectral mixture (SM) kernel achieves high accuracy, the good performance should be attributed to the strong inductive bias to periodic functions in the SM kernel (Wilson & Adams, 2013). Additional results for MFRL with different activation functions are reported in Appendix A.3. In the head pose estimation experiment, MFRL also achieves the best accuracy. In both few-shot regression problems, SWA results in improved accuracy, suggesting that SWA can improve the quality of features and facilitate the learning of downstream tasks. In Fig. 1, uncertainty is correctly estimated by the hierarchical Bayesian linear model with learned features using just 10 training samples.
5.2 FEW-SHOT CLASSIFICATION RESULTS
We conduct few-shot classification experiments on four widely used few-shot image recognition benchmarks: miniImageNet (Ravi & Larochelle, 2017), tieredImageNet (Ren et al., 2018), CIFAR-FS (Bertinetto et al., 2019), and FC100 (Oreshkin et al., 2018). In addition, we test our approach on a cross-domain few-shot classification task from miniImageNet to CUB. The experiment details about the few-shot classification datasets can be found in Appendix A.2. The proposed method is applied to three widely used network architectures: ResNet-12 (Lee et al., 2019; Ravichandran et al., 2019), wide ResNet (WRN-28-10) (Dhillon et al., 2020; Rusu et al., 2019), and a 4-layer convolutional neural network with 64 channels (Chen et al., 2019; Patacchiola et al., 2020) (in Appendix A.3).
The results of the proposed method and previous SOTA methods using similar backbones are reported in Tables 2 and 3. The proposed method achieves the best performance in most of the experiments when compared with previous SOTA methods. Our method is closely related to Baseline++ (Chen et al., 2019) and fine-tuning on logits (Dhillon et al., 2020). Baseline++ normalizes both the classification weights and the features, while the proposed method only normalizes the features. This allows our method to find a more accurate model in a more flexible hypothesis space, given high-quality representation. Compared with fine-tuning on logits, our method obtains better results by learning
a new logistic regression model on features, which store richer information about the data. Some approaches pretrain a C-class classification model on all training data and then apply highly sophisticated meta-learning techniques to the pretrained model to achieve SOTA performance (Rusu et al., 2019; Sun et al., 2019). Our approach with SWA outperforms those pretrained-then-meta-learned models, which demonstrates that SWA obtains high-quality representation that generalizes well to unseen tasks. Compared with improving representation quality for few-shot classification via selfdistillation (Tian et al., 2020), the computational cost of SWA is significantly smaller because it does not require training models from scratch multiple times. Moreover, SWA can be applied to find good representation for both few-shot regression and classification, while previous transfer learning approaches can only handle few-shot classification problems (Mangla et al., 2020; Tian et al., 2020).
MFRL is also applied to the cross-domain few-shot classification task, as summarized in Table 4. MFRL outperforms the other methods in this challenging task, indicating that the learned representation has strong generalization capability. We use the same hyperparameters (training epochs, learning rate, learning rate in SWA, SWA epochs, etc.) as in Table 2. The strong results indicate that MFRL is robust to hyperparameter choice. Surprisingly, meta-learning methods with adaptive embeddings do not outperform simple transfer learning methods like Baseline++ when the domain gap between base classes and novel classes is large. We notice that Tian et al. (2020) also report similar results, namely that transfer learning methods show superior performance on a large-scale cross-domain few-shot classification dataset. We still believe that adaptive embeddings should be helpful when the domain gap between base and novel classes is large. Nevertheless, how to properly train a model to obtain useful adaptive embeddings in novel tasks is an open question.
5.3 EFFECTIVE RANK OF THE REPRESENTATION
The rank of representation defines the number of independent bases. For deep learning, noise in gradients and numerical imprecision can cause the resulting matrix to be full-rank. Therefore, simply counting the number of non-zero singular values may not be an effective way to measure the rank of the representation. To compare effective ranks, we plot the normalized singular values of the representation of meta-test data in Fig. 2, where the representation with SWA has a faster decay in singular values, indicating a lower effective rank of the representation with SWA. The results empirically verify our conjecture that SWA is an implicit regularizer towards low-rank representation.
[Figure 2: Normalized singular values $\sigma_i / \sigma_{\max}$ (y-axis) against the singular value index $i$ (x-axis) for the representation with and without SWA on miniImageNet, tieredImageNet, CIFAR-FS, and FC100. The metric $-\sum_i \bar{\sigma}_i \log \bar{\sigma}_i$, where $\bar{\sigma}_i = \sigma_i / \sigma_{\max}$, measures the effective rank of the representation: miniImageNet 68.49 (w/o SWA) vs. 58.49 (SWA); tieredImageNet 83.98 vs. 80.01; CIFAR-FS 40.94 vs. 33.63; FC100 40.22 vs. 34.34. Faster decay in singular values indicates that fewer dimensions capture most of the variation across all dimensions, and thus a lower effective rank.]
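The effective-rank metric in Figure 2 can be computed directly from the feature matrix; a small sketch (the epsilon guard is ours):

```python
import numpy as np

def effective_rank_entropy(features):
    """Entropy-style effective rank -sum(s_bar * log(s_bar)) with
    s_bar = sigma / sigma_max, for an (n x p) feature matrix."""
    s = np.linalg.svd(features, compute_uv=False)
    s_bar = s / s.max()
    return float(-np.sum(s_bar * np.log(s_bar + 1e-12)))  # eps avoids log(0)
```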
5.4 FEW-SHOT CLASSIFICATION RELIABILITY
The proposed method not only achieves high accuracy in few-shot classification but also makes the classification uncertainty well-calibrated. A reliability diagram can be used to check model calibration visually: it plots prediction accuracy against confidence, and a perfectly calibrated model follows the identity function (DeGroot & Fienberg, 1983). Fig. 3 shows the classification reliability diagrams along with widely used metrics for uncertainty calibration, including the expected calibration error (ECE) (Guo et al., 2017), the maximum calibration error (MCE) (Naeini et al., 2015), and the Brier score (BRI) (Brier, 1950). ECE measures the average binned difference between confidence and accuracy, while MCE measures the maximum difference. BRI is the squared error between the predicted probabilities and the one-hot labels. MAML is over-confident because tuning a deep neural network on few-shot data is prone to over-fitting. Meanwhile, Proto Net and Matching Net are better calibrated than MAML because they do not fine-tune the entire network during testing; nevertheless, they are still slightly over-confident. The results indicate that MFRL with a global temperature scaling factor can learn well-calibrated models from very limited training samples.
5.5 APPLICATION IN META-LEARNING
Meanwhile, we also apply SWA to episodic meta-learning methods, such as Proto Net, MAML, and Matching Net, to improve their classification accuracy. The results in Table 5 indicate that SWA can improve few-shot classification accuracy in both transfer learning and episodic meta-learning. SWA is orthogonal to the learning paradigm and model architecture; thus, it can be applied to a wide range of few-shot learning methods to improve accuracy.
Furthermore, the temperature scaling factor can be applied to calibrate meta-learning methods, including MAML, Proto Net, and Matching Net. The reliability diagrams in Fig. 3 indicate that the temperature scaling factor not only calibrates classification uncertainty of transfer learning approaches, such as the proposed MFRL, but also makes the classification uncertainty well-calibrated in episodic meta-learning methods. Therefore, the temperature scaling factor can be applied to a wide range of few-shot classification methods to get well-calibrated uncertainty, while preserving the classification accuracy.
6 DISCUSSION
SWA has been applied to supervised learning of deep neural networks (Izmailov et al., 2018; Athiwaratkun et al., 2019), and its effectiveness was attributed to convergence to a solution on the flat side of an asymmetric loss valley (He et al., 2019). However, this does not explain the effectiveness of SWA in few-shot learning, because the meta-training and meta-test losses are not comparable after the top layer is retrained on the few-shot support data of a meta-test task. The effectiveness of SWA in few-shot learning must instead be related to properties of the representation. Although our results empirically demonstrate that SWA results in low-rank representation, further research on this connection is needed.
Explicit regularizers can also be used to obtain simple input-output functions in deep neural networks and low-rank representation, including L1 regularization, nuclear norm, spectral norm, and Frobenius norm (Bartlett et al., 2017; Neyshabur et al., 2018; Sanyal et al., 2020). However, some of those explicit regularizers are not compatible with standard SGD training or are computationally expensive. In addition, it is difficult to choose the appropriate strength of explicit regularization. Too strong explicit regularization can bias towards simple solutions that do not fit the data. In comparison, SWA is an implicit regularizer that is completely compatible with the standard SGD training without much extra computational cost. Thus, it can be easily combined with transfer learning and meta-learning to obtain more accurate few-shot learning models. In parallel, SWA is also robust to the choice of the hyperparameters - the learning rate and training epochs in the SWA stage (see details in Appendix A.4).
7 CONCLUSIONS
In this article, we propose MFRL to obtain accurate and reliable few-shot learning models. SWA is an implicit regularizer towards low-rank representation, which generalizes well to unseen meta-test tasks. The proposed method can be applied to both classification and regression tasks. Extensive experiments show that our method not only outperforms other SOTA methods on various datasets but also correctly quantifies the uncertainty in prediction.
A APPENDIX
A.1 PSEUDO CODE FOR MFRL
Algorithm 1 Meta-free representation learning for few-shot learning

1. Merge all training tasks into a single dataset $D_{\text{tr}} = \{D_\tau\}_{\tau=1}^{T}$.
2. Initialize model parameters $\theta = [\theta_f, \mathbf{W}]$.
3. Maximize the likelihood $p(D_{\text{tr}} \mid \theta)$ on all training data using SGD:
   minimize the squared loss (Eq. 2) for regression problems;
   minimize the cross-entropy loss (Eq. 3) for classification problems.
4. Run SWA to obtain $\theta_{\text{SWA}}$.
5. Discard $\mathbf{W}$ and freeze $\theta_f$.
6. Learn a new top layer using the support data $D$ in a test task:
   a hierarchical Bayesian linear model for a regression task;
   a logistic regression model with the temperature scaling factor for a classification task.
A.2 EXPERIMENT DETAILS
Sine waves are generated by $y = A \sin(x - \phi) + \epsilon$, where the amplitude $A \in [0.1, 5.0]$, the phase $\phi \in [0, \pi]$, and $\epsilon$ is white noise with a standard deviation of 0.1 (Finn et al., 2017). Each sine wave contains 200 samples, with $x$ sampled uniformly from $[-5.0, 5.0]$. We generate 500 waves for training, validation, and testing, respectively; all sine waves are different from each other. We use the same backbone network described in MAML (Finn et al., 2017): a two-layer MLP with 40 hidden units in each layer. We use the SGD optimizer with a learning rate of $10^{-3}$ over $8 \times 10^4$ training iterations and run SWA over $2 \times 10^4$ training iterations with a learning rate of 0.05.
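The task generator is simple enough to sketch directly from this description:

```python
import numpy as np

def sample_sine_task(n_samples=200, rng=np.random):
    """One sine-wave task: y = A*sin(x - phi) + eps with A in [0.1, 5.0],
    phi in [0, pi], eps ~ N(0, 0.1^2), and x drawn uniformly from [-5, 5]."""
    A = rng.uniform(0.1, 5.0)
    phi = rng.uniform(0.0, np.pi)
    x = rng.uniform(-5.0, 5.0, size=n_samples)
    y = A * np.sin(x - phi) + rng.normal(0.0, 0.1, size=n_samples)
    return x, y
```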
Head pose regression data is derived from the Queen Mary University of London multi-view face dataset (Gong et al., 1996). It contains images from 37 people and 133 facial images per person. Facial images cover a view sphere of 90◦ in yaw and 120◦ in tilt. The dataset is divided into 3192 training samples (24 people), 1064 validation samples (8 people), and 665 test samples (5 people). We use the same feature extractor described in literature (Patacchiola et al., 2020): a three-layer convolutional neural network, each with 36 output channels, stride 2, and dilation 2. We train the model on the training people set for 300 epochs using the SGD optimizer with a learning rate of 0.01 and run 25 epochs of SWA with a learning rate of 0.01.
miniImageNet is a 100-class subset of the original ImageNet dataset (Deng et al., 2009) for fewshot learning (Vinyals et al., 2016). Each class contains 600 images in RGB format of the size 84 × 84. miniImageNet is split into 64 training classes, 16 validation classes, and 20 testing classes, following the widely used data splitting protocol (Ravi & Larochelle, 2017).
tieredImageNet is another subset of the ImageNet dataset for few-shot learning (Ren et al., 2018). It contains 608 classes grouped into 34 categories, which are split into 20 training categories (351 classes), 6 validation categories (97 classes), and 8 testing categories (160 classes). Compared with miniImageNet, training classes in tieredImageNet are sufficiently distinct from test classes, making few-shot classification more difficult.
CIFAR-FS is a derivative of the original CIFAR-100 dataset by randomly splitting 100 classes into 64, 16, and 20 classes for training, validation, and testing, respectively (Bertinetto et al., 2019).
FC100 is another derivative of CIFAR-100 with minimized overlapped information between train classes and test classes by grouping the 100 classes into 20 superclasses (Oreshkin et al., 2018). They are further split into 60 training classes (12 superclasses), 20 validation classes (4 superclasses), and 20 test classes (4 superclasses).
miniImageNet to CUB is a cross-domain few-shot classification task, where models are trained on miniImageNet and tested on CUB (Welinder et al., 2010). Cross-domain few-shot classification is more challenging due to the big domain gap between the two datasets, so we can better evaluate the generalization capability of different algorithms. We follow the experiment setup in Yue et al. (2020) and use WRN-28-10 as the backbone.
The backbone model is trained on all training classes using C-class cross-entropy loss by the SGD optimizer (momentum of 0.9 and weight decay of 1e-4) with a mini-batch size of 64. The learning rate is initialized as 0.05 and is decayed by 0.1 after 60, 80, and 90 epochs (100 epochs in total). After the SGD training converges, we run 100 epochs of SWA with a learning rate of 0.02. Note that MFRL is not sensitive to training epochs and learning rates in SWA (see Appendix A.4). The training images are augmented with random crop, random horizontal flip, and color jitter.
During testing, we conduct 5 independent runs of 600 randomly sampled few-shot classification tasks from test classes and calculate the average accuracy. Each task contains 5 classes, 1 × 5 or 5× 5 support samples, and 75 query samples. A logistic regression model is learned using only the support samples. The classification accuracy is evaluated on the query samples.
A.3 ADDITIONAL RESULTS ON FEW-SHOT REGRESSION AND CLASSIFICATION
The additional results on few-shot regression using different activation functions are reported in Table 6. MFRL achieves high accuracy with different activation functions.
The few-shot classification results using a 4-layer convolutional neural network (or similar architectures) are reported in Tables 7 and 8. Similar to the results using ResNet-12 and WRN-28-10, the proposed method outperforms a wide range of meta-learning approaches. Our method is only second to few-shot embedding adaptation with transformer (FEAT) (Ye et al., 2020) on the miniImageNet dataset. Recently, meta-learned attention modules have been built on top of convolutional neural networks to improve few-shot classification accuracy. Direct comparison to those methods with attention modules (Ye et al., 2020; Fei et al., 2021; Zhang et al., 2021) may not be fair because recent studies show that the transformer itself can achieve better results than convolutional neural networks in image classification (Dosovitskiy et al., 2021). It is difficult to determine whether the performance improvement is due to the meta-learning algorithm or the attention modules. To make a fair comparison, we add convolutional block attention modules (Woo et al., 2018) on top of ResNet-12 features (before global average pooling). As shown in Table 9, MFRL with attention modules achieves results comparable to MELR and IEPT.
The uncertainty calibration results of MFRL with the temperature scaling factor are presented in Fig. 5. The prediction confidence aligns well with the prediction accuracy, demonstrating that MFRL with the temperature scaling factor results in well-calibrated models.
A.4 SENSITIVITY OF MFRL
The performance of MFRL is not sensitive to the learning rate in SWA. As shown in Fig. 6, the representation learned by SWA generalizes better than the one from standard SGD, as long as the learning rate in SWA is within a reasonable range. In addition, the prediction accuracy on meta-test tasks remains stable even after running SWA for many epochs on the training data. Therefore, MFRL is not sensitive to the number of training epochs. This desirable property makes the proposed method easy to use when solving few-shot learning problems in practice.
A.5 COMPARISON WITH EXPONENTIAL MOVING AVERAGING
Exponential moving averaging (EMA) decays the importance of model weights from early training epochs exponentially: $\theta_{\text{avg}} \leftarrow a\,\theta_{\text{avg}} + (1-a)\,\theta_{\text{new}}$. We try EMA with different values of $a$. In Table 10, EMA improves the performance when $a$ is within a reasonable range. Note that EMA introduces one extra hyperparameter, the forgetting factor $a$, which makes EMA less desirable in practice.
A.6 HIERARCHICAL BAYESIAN LINEAR CLASSIFICATION MODEL
Similar to the hierarchical Bayesian linear regression model, the prior distribution over $\mathbf{w}$ is $p(\mathbf{w} \mid \lambda) = \prod_{i=0}^{p} \mathcal{N}(w_i \mid 0, \lambda^{-1})$, where $\lambda$ is the precision of the Gaussian prior. The hyperprior on $\lambda$ is defined as $p(\lambda) = \text{Gamma}(\lambda \mid a, b)$. The posterior over all latent variables given the data is

$$p(\mathbf{w}, \lambda \mid \mathbf{X}, \mathbf{y}) = \frac{p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}, \lambda)\, p(\mathbf{w} \mid \lambda)\, p(\lambda)}{p(\mathbf{y} \mid \mathbf{X})}. \qquad (7)$$
MCMC sampling (Hoffman & Gelman, 2014) is used to avoid potential deterioration in predictive performance due to approximate inference. A flat, non-informative hyperprior ($a = b = 10^{-6}$) is used because no prior knowledge is available. In Table 11, the hierarchical Bayesian linear classification model achieves slightly worse performance than the logistic regression model. Moreover, the classification model is not well calibrated, as shown in Fig. 7.
After fine-tuning $a$ and $b$ on the meta-validation data, it is possible to obtain better-calibrated classification models on test tasks. However, the classification accuracy is still slightly worse than logistic regression after hyperparameter tuning. Our observations align with a recent study showing that Bayesian classification models cannot match their non-Bayesian counterparts without tempering the posterior (Wenzel et al., 2020). We do not further experiment with tempered posteriors in the hierarchical Bayesian linear classification model because they introduce an extra temperature hyperparameter that requires tuning. The original purpose of introducing the hierarchical Bayesian model was to obtain an accurate and well-calibrated classification model without hyperparameter tuning. Consequently, the hierarchical Bayesian model is not used in few-shot classification, because hierarchical Bayesian linear classification models cannot achieve high accuracy and good uncertainty calibration from a non-informative hyperprior. If hyperparameter tuning is inevitable, it is much easier to tune a logistic regression model with a temperature scaling factor than a hierarchical Bayesian model. Furthermore, the computational cost of learning a hierarchical Bayesian linear classification model via MCMC sampling is much larger than that of learning a logistic regression model. | 1. What is the main contribution of the paper regarding few-shot learning?
2. What are the strengths and weaknesses of the proposed method compared to other recent works?
3. How does the reviewer assess the significance of using stochastic weight averaging (SWA) for few-shot learning?
4. What are the concerns regarding the theoretical link between low-rank representation and SWA?
5. How would the proposed method perform when the feature difference between base classes and few-shot classes is significant?
6. Would it be beneficial to include an ablation study comparing Exponential Moving Average (EMA) and SWA in few-shot learning?
7. Should the authors compare their method with Baseline++ that employs SWA in pre-training? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a few-shot learning method based on representation pre-training with stochastic weight averaging (SWA). The merit of the proposed work is that the method works for both few-shot regression and classification problems, and both achieve better results than the other recent works reported in the manuscript. As the authors claim, this is the first few-shot learning work that handles both classification and regression.
Review
1), It is well-known that SWA is an effective tool in weight space for obtaining better representation learning, as it leads to a flatter minimum. As a result, it is not very surprising that SWA achieves better results in transfer learning for few-shot learning. Actually, SWA improves most applications, as the authors mention for meta-based approaches. Therefore, employing SWA for few-shot learning might not be a significant enough contribution for an individual work. It would be more interesting to provide more analysis of why SWA leads to better performance compared with other solutions. Moreover, the contribution of a flat loss surface to adaptation is already well discussed in some existing incremental few-shot learning research.
2), It is interesting that the authors use meta-learning theory to guide research on transfer learning based few-shot learning, i.e., to make the features low-rank. The results show that the performance is improved. However, the theoretical link is still not clear. Is the low-rank representation criterion general enough that all representation learning benefits from it? Or is SWA the best way to achieve low-rank representation for few-shot learning? One might suspect that any learning method that improves performance also contributes to low-rank representation, e.g., various pre-training losses, network structures, etc.
3), For transfer learning based few-shot learning methods, freezing the feature extractor makes it almost impossible to further adapt and generalize to a cross-domain test environment if the feature difference between the base classes and the few-shot classes is significant. As a result, the evaluation performance will depend strongly on whether the few-shot and pre-training data are similar. In contrast, methods like MAML adapt and fine-tune the feature extractor for the novel classes, which makes them more difficult to train, but their generalization capability will be stronger when the features of the few-shot classes differ from those of the pre-training base classes. As a result, to prove the effectiveness of the proposed method, more ablations should be added with different settings, e.g., few-shot classes close to the pre-training base classes, few-shot classes far away from the pre-training base classes, etc. In this way, the reader will be able to see the comparative advantages and limitations of the proposed method. Otherwise, it might not be proper to directly conclude that the proposed method is better than existing methods.
4) Similar to SWA, Exponential Moving Averaging (EMA) also yields better results than the directly trained weights, and both are approaches in weight space. It would be reasonable and interesting to add an ablation study examining whether EMA achieves similar results in few-shot learning and whether EMA also contributes to low-rank representation.
5) Similarly, for other transfer-learning-based few-shot learning methods such as Baseline++, it is straightforward and fully compatible to employ SWA in their pre-training; a comparison should be analyzed to see whether MFRL is better than Baseline++ with SWA, in order to demonstrate the effectiveness of MFRL.
ICLR | Title
Meta-free few-shot learning via representation learning with weight averaging
Abstract
Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms. Transfer learning approaches are a natural alternative, but they are restricted to few-shot classification. Moreover, little attention has been paid to the development of probabilistic models with well-calibrated uncertainty from few-shot samples, except for some Bayesian episodic learning algorithms. To tackle the aforementioned issues, we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification. The resulting method does not require episodic meta-learning and is called meta-free representation learning (MFRL). MFRL first finds low-rank representation generalizing well on meta-test tasks. Given the learned representation, probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty. The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty. In addition, weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms with a wide range of learning paradigms and model architectures.
1 INTRODUCTION
Currently, the vast majority of few-shot learning methods are within the general paradigm of meta-learning (a.k.a. learning to learn) (Bengio et al., 1991; Schmidhuber, 1987; Thrun & Pratt, 1998), which learns multiple tasks in an episodic manner to distill transferrable knowledge (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017). Although many episodic meta-learning methods report state-of-the-art (SOTA) performance, recent studies show that simple transfer learning methods with fixed embeddings (Chen et al., 2019; Tian et al., 2020) can achieve similar or better performance in few-shot learning. It is found that the effectiveness of optimization-based meta-learning algorithms is due to reusing high-quality representation, instead of rapid learning of task-specific representation (Raghu et al., 2020). The quality of the representation is not quantitatively defined, except for some empirical case studies (Goldblum et al., 2020). Recent machine learning theories (Saunshi et al., 2021) indicate that low-rank representation leads to better sample efficiency in learning a new task. However, those theoretical studies are within the paradigm of meta-learning and do not reveal how to obtain low-rank representation for few-shot learning outside the realm of meta-learning. This motivates us to investigate ways to improve the representation for adapting to new few-shot tasks in a meta-free manner by taking advantage of the simplicity and robustness of transfer learning.
In parallel, existing transfer learning methods also have limitations. That is, the existing transfer learning methods may not find representation generalizing well to unseen few-shot tasks (Chen et al., 2019; Dhillon et al., 2020), compared with state-of-the-art meta-learning methods (Ye et al., 2020; Zhang et al., 2020). Although some transfer learning methods utilize knowledge distillation and self-supervised training to achieve strong performance in few-shot classification, they are restricted to few-shot classification problems (Mangla et al., 2020; Tian et al., 2020). To the best of our knowledge, no transfer learning method has been developed to achieve performance similar to meta-learning in few-shot regression. As such, it is desirable to have a transfer learning method that finds high-quality representation generalizing well to unseen classification and regression problems.
The last limitation of the existing transfer learning methods in few-shot learning is the lack of uncertainty calibration. Uncertainty quantification is concerned with the quantification of how likely certain outcomes are. Despite a plethora of few-shot learning methods (in fact, machine learning methods in general) to improve point estimation accuracy, few methods are developed to obtain probabilistic models with improved uncertainty calibration by integrating Bayesian learning into episodic meta-training (Grant et al., 2018; Finn et al., 2018; Yoon et al., 2018; Snell & Zemel, 2021). Few-shot learning models can be used in risk-averse applications such as medical diagnosis (Prabhu et al., 2019). The diagnosis decision is based not only on the point estimate but also on the probabilities associated with the prediction. The risk of making wrong decisions is significant when using uncalibrated models (Begoli et al., 2019). Thus, the development of proper fine-tuning steps to achieve well-calibrated models is the key towards practical applications of transfer learning in few-shot learning.
In this paper, we develop a simple transfer learning method as our own baseline to allow easy regularization towards more generalizable representation and calibration of prediction uncertainty. The regularization in the proposed transfer learning method works for regression and classification problems so that we can handle both problems within a common architecture. The calibration procedure is easily integrated into the developed transfer learning method to obtain few-shot learning models with good uncertainty quantification. Therefore, the resulting method, called Meta-Free Representation Learning (MFRL), overcomes the aforementioned limitations in existing transfer learning methods for few-shot learning. Our empirical studies demonstrate that the relatively overlooked transfer learning method can achieve high accuracy and well-calibrated uncertainty in few-shot learning when it is combined with the proper regularization and calibration. Those two tools are also portable to meta-learning methods to improve accuracy and calibration, but the improvement is less significant compared with that of transfer learning.
We use stochastic weight averaging (SWA) (Izmailov et al., 2018), which is agnostic to loss function types, as implicit regularization to improve the generalization capability of the representation. We also shed light on that the effectiveness of SWA is due to its bias towards low-rank representation. To address the issue of uncertainty quantification, we fine-tune appropriate linear layers during the meta-test phase to get models with well-calibrated uncertainty. In MFRL, hierarchical Bayesian linear models are used to properly capture the uncertainty from very limited training samples in few-shot regression, whereas the softmax output is scaled with a temperature parameter to make the few-shot classification model well-calibrated. Our method is the first one to achieve well-calibrated few-shot models by only fine-tuning probabilistic linear models in the meta-test phase, without any learning mechanisms related to the meta-training or representation learning phase.
Our contributions in this work are summarized as follows:
• We propose a transfer learning method that can handle both few-shot regression and classification problems with performance exceeding SOTA.
• For the first time, we empirically find the implicit regularization of SWA towards low-rank representation, which is a useful property in transferring to few-shot tasks.
• The proposed method results in well-calibrated uncertainty in few-shot learning models while preserving SOTA accuracy.
• The implicit regularization of SWA and temperature scaling factor can be applied to existing meta-learning methods to improve their accuracy and reliability in few-shot learning.
2 RELATED WORK
Episodic meta-learning approaches can be categorized into metric-based and optimization-based methods. Metric-based methods project input data to feature vectors through nonlinear embeddings and compare their similarity to make the prediction. Examples of similarity metrics include the weighted L1 metric (Koch et al., 2015), cosine similarity (Qi et al., 2018; Vinyals et al., 2016), and Euclidean distance to class-mean representation (Snell et al., 2017). Instead of relying on predefined metrics, learnable similarity metrics are introduced to improve the few-shot classification performance (Oreshkin et al., 2018; Sung et al., 2018). Recent metric-based approaches focus on developing task-adaptive embeddings to improve few-shot classification accuracy. Those task-adaptive embeddings include attention mechanisms for feature transformation (Fei et al., 2021; Gidaris & Komodakis, 2018; Ye et al., 2020; Zhang et al., 2021), graph neural networks (Garcia & Estrach,
2018), implicit class representation (Ravichandran et al., 2019), and task-dependent conditioning (Oreshkin et al., 2018; Yoon et al., 2020; 2019). Although metric-based approaches achieve strong performance in few-shot classification, they cannot be directly applied to regression problems.
Optimization-based meta-learning approaches try to find transferrable knowledge and adapt to new tasks quickly. An elegant and powerful meta-learning approach, termed model-agnostic metalearning (MAML), solves a bi-level optimization problem to find good initialization of model parameters (Finn et al., 2017). However, MAML has a variety of issues, such as sensitivity to neural network architectures, instability during training, arduous hyperparameter tuning, and high computational cost. On this basis, some follow-up methods have been developed to simplify, stabilize and improve the training process of MAML (Antoniou et al., 2018; Flennerhag et al., 2020; Lee & Choi, 2018; Nichol et al., 2018; Park & Oliva, 2019). In practice, it is very challenging to learn high-dimensional model parameters in a low-data regime. Latent embedding optimization (LEO) attempts to learn low-dimensional representation to generate high-dimensional model parameters (Rusu et al., 2019). Meanwhile, R2-D2 (Bertinetto et al., 2019) and MetaOptNet (Lee et al., 2019) reduce the dimensionality of trainable model parameters by freezing feature extraction layers during inner loop optimization. Note that the proposed method is fundamentally different from R2-D2 and MetaOptNet because our method requires neither episodic meta-learning nor bi-level optimization.
Transfer learning approaches first learn a feature extractor on all training data through standard supervised learning, and then fine-tune a linear predictor on top of the learned feature extractor in a new task (Chen et al., 2019). However, vanilla transfer learning methods for few-shot learning do not take extra steps to make the learned representation generalize well to unseen meta-test tasks. Some approaches in this paradigm are developed to improve the quality of representation and boost the accuracy of few-shot classification, including cooperative ensembles (Dvornik et al., 2019), knowledge distillation (Tian et al., 2020), and auxiliary self-supervised learning (Mangla et al., 2020). Nevertheless, those transfer learning methods are restricted to few-shot classification. MFRL aims to find representation generalizing well from the perspective of low-rank representation learning, which is supported by recent theoretical studies (Saunshi et al., 2021). Furthermore, MFRL is the first transfer learning method that can handle both few-shot regression and classification problems and make predictions with well-calibrated uncertainty.
3 BACKGROUND
3.1 EPISODIC META-LEARNING
In episodic meta-learning, the meta-training data contains T episodes or tasks, where the $\tau$-th episode consists of data $D_\tau = \{(x_{\tau,j}, y_{\tau,j})\}_{j=1}^{N_\tau}$ with $N_\tau$ samples. Tasks and episodes are used interchangeably in the rest of the paper. Episodic meta-learning algorithms aim to find common model parameters $\theta$ which can be quickly adapted to task-specific parameters $\phi_\tau$ ($\tau = 1, \ldots, T$). For example, MAML-type algorithms assume $\phi_\tau$ is one or a few gradient steps away from $\theta$ (Finn et al., 2017; 2018; Grant et al., 2018; Yoon et al., 2018), while other meta-learning approaches assume that $\phi_\tau$ and $\theta$ share the parameters in the feature extractor and only differ in the top layer (Bertinetto et al., 2019; Lee et al., 2019; Snell et al., 2017).
3.2 STOCHASTIC WEIGHT AVERAGING
The idea of stochastic weight averaging (SWA) along the trajectory of SGD goes back to Polyak–Ruppert averaging (Polyak & Juditsky, 1992). Theoretically, weight averaging results in faster convergence for linear models in supervised learning and reinforcement learning (Bach & Moulines, 2013; Lakshminarayanan & Szepesvari, 2018). In deep learning, we are more interested in tail stochastic weight averaging (Jain et al., 2018), which averages the weights after T training epochs. The averaged model parameters $\theta_{\mathrm{SWA}}$ can be computed by running s additional training epochs using SGD:

$$\theta_{\mathrm{SWA}} = \frac{1}{s} \sum_{i=T+1}^{T+s} \theta_i, \qquad (1)$$
where $\theta_i$ denotes the model parameters at the end of the $i$-th epoch. SWA has been applied to supervised learning of deep neural networks to achieve higher test accuracy (Izmailov et al., 2018).
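For concreteness, a minimal PyTorch sketch of this tail averaging is given below; the model, optimizer, and the train_one_epoch helper are assumptions, and torch.optim.swa_utils.AveragedModel provides an equivalent built-in utility.

import copy
import torch

def tail_swa(model, optimizer, train_one_epoch, s, swa_lr=0.02):
    # Run s extra SGD epochs at a constant learning rate and average the
    # weights at the end of each epoch, as in Eq. 1.
    for group in optimizer.param_groups:
        group["lr"] = swa_lr
    avg = {k: torch.zeros_like(v) if v.is_floating_point() else v.clone()
           for k, v in model.state_dict().items()}
    for _ in range(s):
        train_one_epoch(model, optimizer)      # one standard SGD epoch
        for k, v in model.state_dict().items():
            if v.is_floating_point():
                avg[k] += v / s                # running average of theta_i
    swa_model = copy.deepcopy(model)
    swa_model.load_state_dict(avg)
    return swa_model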
4 METHODOLOGY
The proposed method is a two-step learning algorithm: meta-free representation learning followed by fine-tuning. We employ SWA to make the learned representation low-rank and better generalize to meta-test data. Given a meta-test task, a new top layer is fine-tuned with few-shot samples to obtain probabilistic models with well-calibrated uncertainty. Note that MFRL can be used for both regression and classification depending on the loss function. The pseudocode of MFRL is presented in Appendix A.1.
4.1 REPRESENTATION LEARNING
Common representation can be learned via maximization of the likelihood of all training data with respect to $\theta$ rather than following episodic meta-learning. To do so, we group the data $D_\tau = \{(x_{\tau,j}, y_{\tau,j})\}_{j=1}^{N_\tau}$ from all meta-training tasks into a single dataset $D_{tr}$. Given the aggregated training data $D_{tr} = \{X, Y\}$, representation can be learned by maximizing the likelihood $p(D_{tr} \mid \theta)$ with respect to $\theta$. Let $\theta = [\theta_f, W]$, where $\theta_f$ represents parameters in the feature extractor and $W$ denotes the parameters in the top linear layer. The feature extractor $h(x) \in \mathbb{R}^p$ is a neural network parameterized by $\theta_f$ and outputs a feature vector of dimension p. The specific form of the loss function depends on whether the task is regression or classification and is given as follows:

$$\mathcal{L}_{\mathrm{RP}}(\theta) = -\log p(D_{tr} \mid \theta) = \begin{cases} \mathcal{L}_{\mathrm{MSE}}(\theta), & \text{regression} \\ \mathcal{L}_{\mathrm{CE}}(\theta), & \text{classification} \end{cases}$$
where
$$\mathcal{L}_{\mathrm{MSE}}(\theta) = \frac{1}{2N'} \sum_{\tau=1}^{T} \sum_{j=1}^{N_\tau} \left( y_{\tau,j} - \mathbf{w}_\tau^\top h(x_{\tau,j}) \right)^2, \qquad (2)$$

$$\mathcal{L}_{\mathrm{CE}}(\theta) = - \sum_{j=1}^{N'} \sum_{c=1}^{C} y_{j,c} \log \frac{\exp(\mathbf{w}_c^\top h(x_j))}{\sum_{c'=1}^{C} \exp(\mathbf{w}_{c'}^\top h(x_j))} \qquad (3)$$
For regression problems, the model learns T regression tasks ($W = [\mathbf{w}_1, \ldots, \mathbf{w}_T] \in \mathbb{R}^{(p+1) \times T}$) simultaneously using the loss function $\mathcal{L}_{\mathrm{MSE}}$ given in Eq. 2, whereas for classification problems the model learns a C-class classification model (see footnote 1) with $W = [\mathbf{w}_1, \ldots, \mathbf{w}_C] \in \mathbb{R}^{(p+1) \times C}$ using the loss function $\mathcal{L}_{\mathrm{CE}}$ in Eq. 3. The loss function - either Eq. 2 or 3 - can be minimized through standard stochastic gradient descent, where $N' = \sum_{\tau=1}^{T} N_\tau$ is the total number of training samples.
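For the classification case, a minimal PyTorch sketch of this merged-task training stage follows; the backbone module and the loader over the merged dataset $D_{tr}$ are assumptions.

import torch
import torch.nn as nn

def train_representation(backbone, loader, feat_dim, num_classes, epochs=100):
    head = nn.Linear(feat_dim, num_classes)        # W in Eq. 3
    opt = torch.optim.SGD(
        list(backbone.parameters()) + list(head.parameters()),
        lr=0.05, momentum=0.9, weight_decay=1e-4)
    criterion = nn.CrossEntropyLoss()              # cross-entropy loss of Eq. 3
    for _ in range(epochs):
        for x, y in loader:                        # merged meta-training data
            loss = criterion(head(backbone(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return backbone                                # W is discarded at test time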
Post-processing via SWA Minimizing the loss functions in Eq. 2 and 3 by SGD may not necessarily result in representation that generalizes well to few-shot learning tasks in the meta-test set. The last hidden layer of a modern deep neural network is high-dimensional and may contain spurious features that over-fit the meta-training data. Recent meta-learning theories indicate that better sample complexity in learning a new task can be achieved via low-rank representation, whose singular values decay faster (Saunshi et al., 2021). We aim to find low-rank representation $\Phi = h(X)$ without episodic meta-learning, which is equivalent to finding the conjugate kernel $K_C = \Phi \Phi^\top$ with fast-decaying eigenvalues. To link the representation with the parameter space, we can linearize the neural network by the first-order Taylor expansion at $\theta_T$ and get the finite-width neural tangent kernel (NTK) $K_{\mathrm{NTK}}(X, X) = J(X) J(X)^\top$, where $J(X) = \nabla_\theta f_\theta(X) \in \mathbb{R}^{N' \times |\theta|}$ is the Jacobian matrix, and $K_{\mathrm{NTK}}$ is a composite kernel containing $K_C$ (Fan & Wang, 2020). The distributions of eigenvalues for $K_{\mathrm{NTK}}$ and $K_C$ are empirically similar, so analyzing $K_{\mathrm{NTK}}$ can shed light on the properties of $K_C$. In parallel, $K_{\mathrm{NTK}}$ shares the eigenvalues of the Gauss-Newton matrix $G = \frac{1}{N'} J(X)^\top J(X)$. For linearized networks with squared loss, the Gauss-Newton matrix $G$ well approximates the Hessian matrix $H$ when $y$ is well described by $f_\theta(x)$ (Martens, 2020). This is the case when SGD converges to $\theta_T$ within a local minimum basin. A Hessian matrix with many small eigenvalues corresponds to a flat minimum, where the loss function is less sensitive to the perturbation of model parameters (Keskar et al., 2017). It is known that averaging the weights after SGD convergence in a local minimum basin pushes $\theta_T$ towards the flat side of the loss valley (He et al., 2019). As a result, SWA could result in a faster decay of eigenvalues in the kernel matrix, and thus low-rank representation. Our conjecture about SWA as implicit regularization towards low-rank representation is empirically verified in Section 5.

Footnote 1: C is the total number of classes in $D_{tr}$. Learning a C-class classification model solves all possible tasks in the meta-training dataset because each task $D_\tau$ only contains a subset of the C classes.
4.2 FINE-TUNING
After representation learning is complete, W is discarded and $\theta_f$ is frozen in a new few-shot task. Given the learned representation, we train a new probabilistic top layer in a meta-test task using few-shot samples. The new top layer is configured differently depending on whether the few-shot task is a regression or a classification problem.
In a few-shot regression task, we learn a new linear regression model $y = \mathbf{w}^\top h(x) + \epsilon$ on a fixed feature extractor $h(x) \in \mathbb{R}^p$ with few-shot training data $D = \{(x_i, y_i)\}_{i=1}^n$, where $\mathbf{w}$ denotes the model parameters and $\epsilon$ is Gaussian noise with zero mean and variance $\sigma^2$. To avoid interpolation on few-shot training data ($n \ll p$), a Gaussian prior $p(\mathbf{w} \mid \lambda) = \prod_{i=0}^{p} \mathcal{N}(w_i \mid 0, \lambda)$ is placed over $\mathbf{w}$, where $\lambda$ is the precision in the Gaussian prior. However, it is difficult to obtain an appropriate value for $\lambda$ in a few-shot regression task because no validation data is available in $D$. Hierarchical Bayesian linear models can be used to obtain optimal regularization strength and grounded uncertainty estimation using few-shot training data only. To complete the specification of the hierarchical Bayesian model, the hyperpriors on $\lambda$ and $\sigma^2$ are defined as $p(\lambda) = \mathrm{Gamma}(\lambda \mid a, b)$ and $p(\sigma^{-2}) = \mathrm{Gamma}(\sigma^{-2} \mid c, d)$, respectively. The hyperpriors become very flat and non-informative when $a$, $b$, $c$ and $d$ are set to very small values. The posterior over all latent variables given the data is $p(\mathbf{w}, \lambda, \sigma^2 \mid X, \mathbf{y})$, where $X = \{x_i\}_{i=1}^n$ and $\mathbf{y} = \{y_i\}_{i=1}^n$. However, this posterior distribution is intractable. The iterative-optimization-based approximate inference of Tipping (2001) is chosen because it is highly efficient. The point estimates for $\lambda$ and $\sigma^2$ are obtained by maximizing the marginal likelihood function $p(\mathbf{y} \mid X, \lambda, \sigma^2)$. The posterior of the model parameters $p(\mathbf{w} \mid X, \mathbf{y}, \lambda, \sigma^2)$ is then calculated using the estimated $\lambda$ and $\sigma^2$. The previous two steps are repeated alternately until convergence.
The predictive distribution for a new sample $x^*$ is

$$p(y^* \mid x^*, X, \mathbf{y}, \lambda, \sigma^2) = \int p(y^* \mid x^*, \mathbf{w}, \sigma^2)\, p(\mathbf{w} \mid X, \mathbf{y}, \lambda, \sigma^2)\, d\mathbf{w}, \qquad (4)$$
which can be computed analytically because both distributions on the right-hand side of Eq. 4 are Gaussian. Consequently, the hierarchical Bayesian linear model avoids over-fitting on few-shot training data and quantifies predictive uncertainty.
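The iterative evidence maximization of Tipping (2001) is implemented, for example, in scikit-learn's BayesianRidge; a sketch of the regression fine-tuning step is shown below, where h denotes the frozen feature extractor and is assumed to return NumPy arrays.

from sklearn.linear_model import BayesianRidge

def finetune_regression(h, x_support, y_support, x_query):
    # Flat Gamma hyperpriors (a = b = c = d = 1e-6), as in the paper.
    model = BayesianRidge(alpha_1=1e-6, alpha_2=1e-6,
                          lambda_1=1e-6, lambda_2=1e-6)
    model.fit(h(x_support), y_support)       # type-II ML for lambda and sigma^2
    mean, std = model.predict(h(x_query), return_std=True)  # predictive dist., Eq. 4
    return mean, std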
In a few-shot classification task, a new logistic regression model is learned with the post-processed representation. A typical K-way n-shot classification task $D = \{(x_i, y_i)\}_{i=1}^{nK}$ consists of K classes (different from the meta-training classes) and n training samples per class. Minimizing an un-regularized cross-entropy loss results in a significantly over-confident classification model because the norm of the logistic regression model parameters $W \in \mathbb{R}^{(p+1) \times K}$ becomes very large when the few-shot training samples can be perfectly separated in the setting $nK \ll p$. A weighted L2 regularization term is added to the cross-entropy loss to mitigate the issue:
$$\mathcal{L}(W) = -\sum_{i=1}^{nK} \sum_{c=1}^{K} y_{i,c} \log \frac{\exp(\mathbf{w}_c^\top h(x_i))}{\sum_{c'=1}^{K} \exp(\mathbf{w}_{c'}^\top h(x_i))} + \lambda \sum_{c=1}^{K} \mathbf{w}_c^\top \mathbf{w}_c, \qquad (5)$$
where λ is the regularization coefficient, which affects the prediction accuracy and uncertainty. It is difficult to select an appropriate value of λ in each of the meta-test tasks due to the lack of validation data in D. We instead treat λ as a global hyper-parameter so that the value of λ should be determined based on the accuracy on meta-validation data. Note that the selected λ with high validation accuracy does not necessarily lead to well calibrated classification models. As such, we introduce the temperature scaling factor (Guo et al., 2017) as another global hyper-parameter to
scale the softmax output. Given a test sample x∗, the predicted probability for class c becomes
$$p_c = \frac{\exp(\mathbf{w}_c^\top h(x^*)/T)}{\sum_{c'=1}^{K} \exp(\mathbf{w}_{c'}^\top h(x^*)/T)}, \qquad (6)$$
where T is the temperature scaling factor. In practice, we select the L2 regularization coefficient $\lambda$ and the temperature scaling factor T as follows. At first, we set T to 1 and perform a grid search on the meta-validation data to find the $\lambda$ that yields the highest meta-validation accuracy. However, fine-tuning $\lambda$ does not ensure good calibration; it is the temperature scaling factor that ensures good uncertainty calibration. Similarly, we do a grid search over T on the meta-validation set and choose the temperature scaling factor that yields the lowest expected calibration error (Guo et al., 2017). Note that different values of T do not affect the classification accuracy because temperature scaling is accuracy preserving.
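A sketch of the classification fine-tuning step with scikit-learn follows; h is the frozen extractor returning NumPy features, lam and T come from the meta-validation grid search, and K > 2 classes are assumed. Note that sklearn's C corresponds to 1/(2λ) up to its parameterization of the L2 penalty.

import numpy as np
from sklearn.linear_model import LogisticRegression

def finetune_classification(h, x_support, y_support, x_query, lam, T):
    def normalize(phi):                          # features are L2-normalized
        return phi / np.linalg.norm(phi, axis=1, keepdims=True)
    clf = LogisticRegression(C=1.0 / (2.0 * lam), max_iter=1000)  # Eq. 5
    clf.fit(normalize(h(x_support)), y_support)
    logits = clf.decision_function(normalize(h(x_query))) / T     # Eq. 6
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)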
5 EXPERIMENTS
We follow the standard setup in few-shot learning literature. The model is trained on a meta-training dataset and hyper-parameters are selected based on the performance on a meta-validation dataset. The final performance of the model is evaluated on a meta-test dataset. The proposed method is applied to few-shot regression and classification problems and compared against a wide range of alternative methods.
5.1 FEW-SHOT REGRESSION RESULTS
Sine waves (Finn et al., 2017) and head pose estimation (Patacchiola et al., 2020) datasets are used to evaluate the performance of MFRL in few-shot regression. We use the same backbones in literature (Patacchiola et al., 2020) to make fair comparisons. Details of the few-shot regression experiments can be found in Appendix A.2.
The results for few-shot regression are summarized in Table 1. In the sine wave few-shot regression, MFRL outperforms all meta-learning methods, demonstrating that high-quality representation can be learned in supervised learning, without episodic meta-learning. Although DKT with a spectral mixture (SM) kernel achieves high accuracy, the good performance should be attributed to the strong inductive bias to periodic functions in the SM kernel (Wilson & Adams, 2013). Additional results for MFRL with different activation functions are reported in Appendix A.3. In the head pose estimation experiment, MFRL also achieves the best accuracy. In both few-shot regression problems, SWA results in improved accuracy, suggesting that SWA can improve the quality of features and facilitate the learning of downstream tasks. In Fig. 1, uncertainty is correctly estimated by the hierarchical Bayesian linear model with learned features using just 10 training samples.
5.2 FEW-SHOT CLASSIFICATION RESULTS
We conduct few-shot classification experiments on four widely used few-shot image recognition benchmarks: miniImageNet (Ravi & Larochelle, 2017), tieredImageNet (Ren et al., 2018), CIFAR-FS (Bertinetto et al., 2019), and FC100 (Oreshkin et al., 2018). In addition, we test our approach on a cross-domain few-shot classification task from miniImageNet to CUB. The experiment details about the few-shot classification datasets can be found in Appendix A.2. The proposed method is applied to three widely used network architectures: ResNet-12 (Lee et al., 2019; Ravichandran et al., 2019), wide ResNet (WRN-28-10) (Dhillon et al., 2020; Rusu et al., 2019), and a 4-layer convolutional neural network with 64 channels (Chen et al., 2019; Patacchiola et al., 2020) (in Appendix A.3).
The results of the proposed method and previous SOTA methods using similar backbones are reported in Table 2 and 3. The proposed method achieves the best performance in most of the experiments when compared with previous SOTA methods. Our method is closely related to Baseline++ (Chen et al., 2019) and fine-tuning on logits (Dhillon et al., 2020). Baseline++ normalizes both classification weights and features, while the proposed method only normalizes features. It allows our method to find a more accurate model in a more flexible hypothesis space, given high-quality representation. Compared with fine-tuning on logits, our method obtains better results by learning
a new logistic regression model on features, which store richer information about the data. Some approaches pretrain a C-class classification model on all training data and then apply highly sophisticated meta-learning techniques to the pretrained model to achieve SOTA performance (Rusu et al., 2019; Sun et al., 2019). Our approach with SWA outperforms those pretrained-then-meta-learned models, which demonstrates that SWA obtains high-quality representation that generalizes well to unseen tasks. Compared with improving representation quality for few-shot classification via selfdistillation (Tian et al., 2020), the computational cost of SWA is significantly smaller because it does not require training models from scratch multiple times. Moreover, SWA can be applied to find good representation for both few-shot regression and classification, while previous transfer learning approaches can only handle few-shot classification problems (Mangla et al., 2020; Tian et al., 2020).
MFRL is also applied to the cross-domain few-shot classification task as summarized in Table 4. MFRL outperforms other methods in this challenging task, indicating that the learned representation has strong generalization capability. We use the same hyperparameters (training epochs, learning rate, learning rate in SWA, SWA epoch, etc.) as in Table 2. The strong results indicate that MFRL is robust to hyperparameter choice. Surprisingly, meta-learning methods with adaptive embeddings do not outperform simple transfer learning methods like Baseline++ when the domain gap between base classes and novel classes is large. We notice that Tian et al. (2020) also reports similar results that transfer learning methods show superior performance on a large-scale cross-domain few-shot classification dataset. We still believe that adaptive embeddings should be helpful when the domain gap between base and novel classes is large. Nevertheless, how to properly train a model to obtain useful adaptive embeddings in novel tasks is an open question.
5.3 EFFECTIVE RANK OF THE REPRESENTATION
The rank of representation defines the number of independent bases. For deep learning, noise in gradients and numerical imprecision can cause the resulting matrix to be full-rank. Therefore, simply counting the number of non-zero singular values may not be an effective way to measure the rank of the representation. To compare effective ranks, we plot the normalized singular values of the representation of meta-test data in Fig. 2, where the representation with SWA has a faster decay in singular values, indicating a lower effective rank for the representation with SWA. The results empirically verify our conjecture that SWA is an implicit regularizer towards low-rank representation.
[Figure 2: Normalized singular values of the meta-test representation with and without SWA. The metric $-\sum_i \bar{\sigma}_i \log \bar{\sigma}_i$, where $\bar{\sigma}_i = \sigma_i / \sigma_{\max}$, measures the effective rank of the representation: miniImageNet 68.49 (w.o. SWA) vs. 58.49 (SWA); tieredImageNet 83.98 vs. 80.01; CIFAR-FS 40.94 vs. 33.63; FC100 40.22 vs. 34.34. Faster decay in singular values indicates that fewer dimensions capture most of the variation across all dimensions, and thus a lower effective rank.]
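A NumPy sketch of the effective-rank metric from the Fig. 2 caption, computed directly from the feature matrix:

import numpy as np

def effective_rank(features):
    # features: (N, p) matrix of meta-test representations.
    s = np.linalg.svd(features, compute_uv=False)
    s_bar = s / s.max()                      # sigma_i / sigma_max, as in Fig. 2
    s_bar = s_bar[s_bar > 0]
    return -np.sum(s_bar * np.log(s_bar))    # lower value = lower effective rank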
5.4 FEW-SHOT CLASSIFICATION RELIABILITY
The proposed method not only achieves high accuracy in few-shot classification but also makes the classification uncertainty well-calibrated. A reliability diagram can be used to check model calibration visually, which plots an identity function between prediction accuracy and confidence when the model is perfectly calibrated (DeGroot & Fienberg, 1983). Fig. 3 shows the classification reliability diagrams along with widely used metrics for uncertainty calibration, including expected calibration error (ECE) (Guo et al., 2017), maximum calibration error (MCE) (Naeini et al., 2015), and Brier score (BRI) (Brier, 1950). ECE measures the average binned difference between confidence and accuracy, while MCE measures the maximum difference. BRI is the squared error between the predicted probabilities and one-hot labels. MAML is over-confident because tuning a deep neural network on few-shot data is prone to over-fitting. Meanwhile, Proto Net and Matching Net are better calibrated than MAML because they do not fine-tune the entire network during testing. Nevertheless, they are still slightly over-confident. The results indicate that MFRL with a global temperature scaling factor can learn well-calibrated models from very limited training samples.
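For reference, a NumPy sketch of the three calibration metrics used above; probs is the (N, K) array of predicted probabilities, labels holds the integer class labels, and the bin count is an assumption.

import numpy as np

def calibration_metrics(probs, labels, n_bins=15):
    conf = probs.max(axis=1)                          # predicted confidence
    correct = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap                  # bin-weighted average gap
            mce = max(mce, gap)                       # worst-case bin gap
    one_hot = np.eye(probs.shape[1])[labels]
    bri = np.mean(np.sum((probs - one_hot) ** 2, axis=1))  # Brier score
    return ece, mce, bri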
5.5 APPLICATION IN META-LEARNING
Meanwhile, we also apply SWA to episodic meta-learning methods, such as Proto Net, MAML, and Matching Net, to improve their classification accuracy. The results in Table 5 indicate that SWA can improve few-shot classification accuracy in both transfer learning and episodic meta-learning. SWA is orthogonal to the learning paradigm and model architecture; thus, SWA can be applied to a wide range of few-shot learning methods to improve accuracy.
Furthermore, the temperature scaling factor can be applied to calibrate meta-learning methods, including MAML, Proto Net, and Matching Net. The reliability diagrams in Fig. 3 indicate that the temperature scaling factor not only calibrates classification uncertainty of transfer learning approaches, such as the proposed MFRL, but also makes the classification uncertainty well-calibrated in episodic meta-learning methods. Therefore, the temperature scaling factor can be applied to a wide range of few-shot classification methods to get well-calibrated uncertainty, while preserving the classification accuracy.
6 DISCUSSION
SWA has been applied to supervised learning of deep neural networks (Izmailov et al., 2018; Athiwaratkun et al., 2019) and its effectiveness was attributed to convergence to a solution on the flat side of an asymmetric loss valley (He et al., 2019). However, it does not explain the effectiveness of SWA in few-shot learning because the meta-training and meta-testing losses are not comparable after the top layer is retrained by the few-shot support data in a meta-test task. The effectiveness of SWA in few-shot learning must be related to the property of the representation. Although our results empirically demonstrate that SWA results in low-rank representation, further research about their connection is needed.
Explicit regularizers can also be used to obtain simple input-output functions in deep neural networks and low-rank representation, including L1 regularization, nuclear norm, spectral norm, and Frobenius norm (Bartlett et al., 2017; Neyshabur et al., 2018; Sanyal et al., 2020). However, some of those explicit regularizers are not compatible with standard SGD training or are computationally expensive. In addition, it is difficult to choose the appropriate strength of explicit regularization. Too strong explicit regularization can bias towards simple solutions that do not fit the data. In comparison, SWA is an implicit regularizer that is completely compatible with the standard SGD training without much extra computational cost. Thus, it can be easily combined with transfer learning and meta-learning to obtain more accurate few-shot learning models. In parallel, SWA is also robust to the choice of the hyperparameters - the learning rate and training epochs in the SWA stage (see details in Appendix A.4).
7 CONCLUSIONS
In this article, we propose MFRL to obtain accurate and reliable few-shot learning models. SWA is an implicit regularizer towards low-rank representation, which generalizes well to unseen meta-test tasks. The proposed method can be applied to both classification and regression tasks. Extensive experiments show that our method not only outperforms other SOTA methods on various datasets but also correctly quantifies the uncertainty in prediction.
A APPENDIX
A.1 PSEUDO CODE FOR MFRL
Algorithm 1 Meta-free representation learning for few-shot learning
1. Merge all training tasks: $D_{tr} = \{D_\tau\}_{\tau=1}^{T}$
2. Initialize model parameters $\theta = [\theta_f, W]$
3. Maximize the likelihood $p(D_{tr} \mid \theta)$ on all training data using SGD:
   minimize the squared loss for regression problems;
   minimize the cross-entropy loss for classification problems
4. Run SWA to obtain $\theta_{\mathrm{SWA}}$
5. Discard W and freeze $\theta_f$
6. Learn a new top layer using the support data D in a test task:
   a hierarchical Bayesian linear model for a regression task;
   a logistic regression model with the temperature scaling factor for a classification task
A.2 EXPERIMENT DETAILS
Sine waves are generated by $y = A \sin(x - \phi) + \epsilon$, where the amplitude $A \in [0.1, 5.0]$, the phase $\phi \in [0, \pi]$, and $\epsilon$ is white noise with a standard deviation of 0.1 (Finn et al., 2017). Each sine wave contains 200 samples, with x sampled uniformly from [-5.0, 5.0]. We generate 500 waves for training, validation, and testing, respectively. All sine waves are different from each other. We use the same backbone network described in MAML (Finn et al., 2017): a two-layer MLP with 40 hidden units in each layer. We use the SGD optimizer with a learning rate of $10^{-3}$ over $8 \times 10^4$ training iterations and run SWA over $2 \times 10^4$ training iterations with a learning rate of 0.05.
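A NumPy sketch of the sine-wave task generator described above:

import numpy as np

def sample_sine_task(n=200, noise_std=0.1, rng=np.random):
    A = rng.uniform(0.1, 5.0)            # amplitude A in [0.1, 5.0]
    phi = rng.uniform(0.0, np.pi)        # phase in [0, pi]
    x = rng.uniform(-5.0, 5.0, size=n)
    y = A * np.sin(x - phi) + noise_std * rng.randn(n)
    return x, y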
Head pose regression data is derived from the Queen Mary University of London multi-view face dataset (Gong et al., 1996). It contains images from 37 people and 133 facial images per person. Facial images cover a view sphere of 90◦ in yaw and 120◦ in tilt. The dataset is divided into 3192 training samples (24 people), 1064 validation samples (8 people), and 665 test samples (5 people). We use the same feature extractor described in literature (Patacchiola et al., 2020): a three-layer convolutional neural network, each with 36 output channels, stride 2, and dilation 2. We train the model on the training people set for 300 epochs using the SGD optimizer with a learning rate of 0.01 and run 25 epochs of SWA with a learning rate of 0.01.
miniImageNet is a 100-class subset of the original ImageNet dataset (Deng et al., 2009) for few-shot learning (Vinyals et al., 2016). Each class contains 600 RGB images of size 84 × 84. miniImageNet is split into 64 training classes, 16 validation classes, and 20 testing classes, following the widely used data splitting protocol (Ravi & Larochelle, 2017).
tieredImageNet is another subset of the ImageNet dataset for few-shot learning (Ren et al., 2018). It contains 608 classes grouped into 34 categories, which are split into 20 training categories (351 classes), 6 validation categories (97 classes), and 8 testing categories (160 classes). Compared with miniImageNet, training classes in tieredImageNet are sufficiently distinct from test classes, making few-shot classification more difficult.
CIFAR-FS is a derivative of the original CIFAR-100 dataset by randomly splitting 100 classes into 64, 16, and 20 classes for training, validation, and testing, respectively (Bertinetto et al., 2019).
FC100 is another derivative of CIFAR-100 with minimized overlapped information between train classes and test classes by grouping the 100 classes into 20 superclasses (Oreshkin et al., 2018). They are further split into 60 training classes (12 superclasses), 20 validation classes (4 superclasses), and 20 test classes (4 superclasses).
miniImageNet to CUB is a cross-domain few-shot classification task, where the models are trained on miniImageNet and tested on CUB (Welinder et al., 2010). Cross-domain few-shot classification is more challenging due to the big domain gap between the two datasets, so we can better evaluate the generalization capability of different algorithms. We follow the experiment setup in Yue et al. (2020) and use WRN-28-10 as the backbone.
The backbone model is trained on all training classes using C-class cross-entropy loss by the SGD optimizer (momentum of 0.9 and weight decay of 1e-4) with a mini-batch size of 64. The learning rate is initialized as 0.05 and is decayed by 0.1 after 60, 80, and 90 epochs (100 epochs in total). After the SGD training converges, we run 100 epochs of SWA with a learning rate of 0.02. Note that MFRL is not sensitive to training epochs and learning rates in SWA (see Appendix A.4). The training images are augmented with random crop, random horizontal flip, and color jitter.
During testing, we conduct 5 independent runs of 600 randomly sampled few-shot classification tasks from the test classes and report the average accuracy. Each task contains 5 classes, 1×5 or 5×5 support samples, and 75 query samples. A logistic regression model is learned using only the support samples, and the classification accuracy is evaluated on the query samples.
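A sketch of the episode sampler used in this evaluation protocol; features_by_class is a hypothetical cache mapping each test class to its frozen-backbone features.

import numpy as np

def sample_episode(features_by_class, n_way=5, n_shot=5, n_query=15, rng=np.random):
    classes = rng.choice(list(features_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        feats = features_by_class[c]                 # (num_images, p) array
        idx = rng.permutation(len(feats))[:n_shot + n_query]
        support += [(feats[i], label) for i in idx[:n_shot]]
        query += [(feats[i], label) for i in idx[n_shot:]]
    return support, query                            # 75 query samples for 5-way/15-query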
A.3 ADDITIONAL RESULTS ON FEW-SHOT REGRESSION AND CLASSIFICATION
The additional results on few-shot regression using different activation functions are reported in Table 6. MFRL achieves high accuracy with different activation functions.
The few-shot classification results using a 4-layer convolutional neural network (or similar architectures) are reported in Table 7 and 8. Similar to the results using ResNet-12 and WRN-28-10, the proposed method outperforms a wide range of meta-learning approaches. Our method is only
second to few-shot embedding adaptation with transformer (FEAT) (Ye et al., 2020) on the miniImageNet dataset. Recently, meta-learned attention modules have been built on top of convolutional neural networks to obtain improved few-shot classification accuracy. Direct comparison to those methods with attention modules (Ye et al., 2020; Fei et al., 2021; Zhang et al., 2021) may not be fair because recent studies show that the transformer itself can achieve better results than convolutional neural networks in image classification (Dosovitskiy et al., 2021). It is difficult to determine whether the performance improvement is due to the meta-learning algorithm or the attention modules. To make a fair comparison, we add convolutional block attention modules (Woo et al., 2018) on top of ResNet-12 features (before global average pooling). As shown in Table 9, MFRL with attention modules achieves comparable results with MELR and IEPT.
The uncertainty calibration results of MFRL with the temperature scaling factor are presented in Fig. 5. The prediction confidence aligns well with the prediction accuracy. It demonstrates that MFRL with the temperature scaling factor results in well calibrated models.
A.4 SENSITIVITY OF MFRL
The performance of MFRL is not sensitive to learning rates in SWA. As shown in Fig. 6, the representation learned by SWA generalizes better than the one from standard SGD, as long as the learning rate in SWA is in a reasonable range. In addition, the prediction accuracy on meta-test tasks keeps stable even after running SWA for many epochs on the training data. Therefore, MFRL is not sensitive to training epochs. This desirable property makes the proposed method easy to use when solving few-shot learning problems in practice.
A.5 COMPARISON WITH EXPONENTIAL MOVING AVERAGING
Exponential moving average (EMA) decays the importance of model weights from early training epochs exponentially: $\theta_{\mathrm{avg}} \leftarrow a\, \theta_{\mathrm{avg}} + (1 - a)\, \theta_{\mathrm{new}}$. We try EMA with different values of a. In Table 10, EMA improves the performance when a is within a reasonable range. Note that EMA introduces one extra hyperparameter, the forgetting factor, which makes EMA less desirable in practice.
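For comparison with the SWA sketch above, the EMA update is a single in-place operation in PyTorch; a is the forgetting factor.

import torch

@torch.no_grad()
def ema_update(avg_params, new_params, a=0.999):
    # theta_avg <- a * theta_avg + (1 - a) * theta_new
    for p_avg, p_new in zip(avg_params, new_params):
        p_avg.mul_(a).add_(p_new, alpha=1.0 - a)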
A.6 HIERARCHICAL BAYESIAN LINEAR CLASSIFICATION MODEL
Similar to the hierarchical Bayesian linear regression model, the prior distribution over $\mathbf{w}$ is $p(\mathbf{w} \mid \lambda) = \prod_{i=0}^{p} \mathcal{N}(w_i \mid 0, \lambda)$, where $\lambda$ is the precision in the Gaussian prior. The hyperprior on $\lambda$ is defined as $p(\lambda) = \mathrm{Gamma}(\lambda \mid a, b)$. The posterior over all latent variables given the data is

$$p(\mathbf{w}, \lambda \mid X, \mathbf{y}) = \frac{p(\mathbf{y} \mid X, \mathbf{w}, \lambda)\, p(\mathbf{w} \mid \lambda)\, p(\lambda)}{p(\mathbf{y} \mid X)} \qquad (7)$$
MCMC sampling (Hoffman & Gelman, 2014) is used to avoid potential deterioration in predictive performance due to approximate inference. A flat and non-informative hyperprior ($a = b = 10^{-6}$) is used because no prior knowledge is available. In Table 11, the hierarchical Bayesian linear classification model achieves slightly worse performance than the logistic regression model. However, the classification model is not well calibrated, as shown in Fig. 7.
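The paper samples with the NUTS algorithm; purely for illustration, a self-contained random-walk Metropolis sketch of the same log-joint is given below. The step size and chain length are arbitrary assumptions, and lambda is sampled on the log scale.

import numpy as np

def log_joint(W, log_lam, Phi, y, a=1e-6, b=1e-6):
    # W: (p, K) weights; Phi: (n, p) features; y: (n,) integer labels.
    lam = np.exp(log_lam)
    logits = Phi @ W
    logits -= logits.max(axis=1, keepdims=True)
    log_like = np.sum(logits[np.arange(len(y)), y]
                      - np.log(np.exp(logits).sum(axis=1)))
    log_prior_w = 0.5 * W.size * np.log(lam) - 0.5 * lam * np.sum(W ** 2)
    log_prior_lam = a * np.log(lam) - b * lam   # Gamma(a, b) prior incl. log-Jacobian
    return log_like + log_prior_w + log_prior_lam

def metropolis(Phi, y, K, steps=5000, step=0.05, rng=np.random):
    W, log_lam = np.zeros((Phi.shape[1], K)), 0.0
    lp = log_joint(W, log_lam, Phi, y)
    samples = []
    for _ in range(steps):
        W_new = W + step * rng.randn(*W.shape)
        log_lam_new = log_lam + step * rng.randn()
        lp_new = log_joint(W_new, log_lam_new, Phi, y)
        if np.log(rng.rand()) < lp_new - lp:    # Metropolis accept/reject
            W, log_lam, lp = W_new, log_lam_new, lp_new
        samples.append(W.copy())
    return samples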
After fine-tuning a and b using the meta-validation data, it is possible to get better calibrated classification models on test tasks. However, the classification accuracy is still slightly worse than logistic regression after hyperparameter tuning. Our observations align with a recent study, which shows that the Bayesian classification model cannot achieve similar performance to the non-Bayesian counterpart without tempering the posterior (Wenzel et al., 2020). We do not further experiment with a tempered posterior in the hierarchical Bayesian linear classification model because it introduces an extra temperature hyperparameter that requires tuning. The original purpose of introducing the hierarchical Bayesian model was to get an accurate and well calibrated classification model without hyperparameter tuning. Consequently, the hierarchical Bayesian model is not used in few-shot classification, given that hierarchical Bayesian linear classification models cannot achieve high accuracy and good uncertainty calibration from a non-informative hyperprior. If hyperparameter tuning is inevitable, it is much easier to tune a logistic regression model with a temperature scaling factor than a hierarchical Bayesian model. Furthermore, the computational cost of learning a hierarchical Bayesian linear classification model via MCMC sampling is much larger than that of learning a logistic regression model.

1. What is the main contribution of the paper regarding few-shot classification and regression?
2. What are the strengths of the proposed approach, particularly in its application of stochastic weight averaging (SWA)?
3. What are the weaknesses of the paper, especially in terms of its presentation and characterization of the proposed method?
4. How does the author view the relationship between MFRL and SWA?
5. Can the authors provide more insights into the effectiveness of MFRL without SWA in regression tasks?
6. Are there any other aspects of MFRL that contribute positively to its performance in the classification setting?
7. How does the author justify the use of hyperparameter selection using meta-validation data, especially in cross-domain settings?
8. Can the authors elaborate on the difference between MFRL and Baseline++?
9. What are the key factors that ensure model calibration in the proposed approach, and how does the fine-tuning procedure guarantee calibration?
10. Are there any early-stopping techniques employed in the fine-tuning process to ensure calibration metrics?

Summary Of The Paper
The submission introduces a few-shot classification and regression approach called Meta-Free Representation Learning (MFRL). First, a representation is learned on the meta-training data: for regression, the model is trained on all available training regression tasks concurrently; for classification, the model is trained on the full-ways classification problem using all training classes. Then, stochastic weight averaging (SWA) is applied to the model by continuing training for a certain number of epochs and averaging the parameters obtained across those additional epochs.
For test tasks, the regression or classification layer is discarded, and a new output layer is trained while freezing the network weights. For regression, approximate inference is performed on a hierarchical Bayesian linear model. For classification, a logistic regression model is trained with L2 regularization and a temperature hyperparameter.
Experimental results are presented on the sine wave and head pose problems for regression, and on mini-ImageNet, tiered-ImageNet, CIFAR-FS, and FC100 for classification. The extent to which SWA encourages learning lower-rank representations (as hypothesized) is verified through visualizations of the normalized eigenvalues. Finally, calibration curves are shown for classification to demonstrate how MFRL's temperature scaling factor leads to better calibrated models.
Review
The writing is easy to follow. The introduction provides a good motivation for the proposed approach and presents a thorough review of the existing literature. The idea of using SWA for few-shot classification is itself interesting and new as far as I can tell, and the empirical results look promising. The conjecture that it fosters lower-rank representations is supported by empirical observations.
However, after reading the paper, I'm left with the question: beyond the application of SWA to the few-shot learning setting, what is MFRL? One of the claimed contributions is that it "can handle both few-shot regression and classification"; however, different adaptation strategies are used for regression (hierarchical Bayesian linear model) and classification (L2-regularized logistic regression with a temperature coefficient). As far as I can tell, the aspects common to both settings are i) learning the representation on the training data using a single objective rather than episodic losses (which has already been explored in previous work), and ii) applying SWA. In fact, Algorithm 1 feels more like two algorithms combined with if-statements than a unique algorithm. I feel that the method would have been better and more straightforwardly presented as an application of SWA to the few-shot learning setting, and I'd be interested in hearing the authors' opinion on this.
Additional questions/comments:
Can the authors say a few words on hyperparameter selection using meta-validation data? In the cross-domain setting (e.g. mini-ImageNet -> CUB, or Meta-Dataset), is this a good selection strategy?
I am surprised by how well MFRL without SWA performs for regression. Can the authors expand on what aspect of MFRL is responsible for its good performance? As I understand, it consists in training concurrently on all training regression tasks; how is it that it outperforms more sophisticated meta-learning approaches?
In the classification setting, MFRL only really stands out when SWA is applied. Are there other aspects to MFRL that positively impact performance?
The difference between MFRL and Baseline++ (one normalizes the features, and the other normalizes both the features and the classification weights) is easy to miss. Being such a small implementation difference, it also reinforces the idea that MFRL reduces to SWA, and naming it is not necessary.
Presenting results on model calibration is good, but the paper feels lacking in details. Judging by the introduction, using a temperature parameter is sufficient to obtain good model calibration. How come? The submission mentions that "given a meta-test task, a new top layer is fine-tuned with few-shot samples to obtain probabilistic models with well-calibrated uncertainty"; what about the fine-tuning procedure ensures calibration? Is some kind of early-stopping performed by looking at calibration metrics? |
ICLR | Title
Uncertainty in Neural Processes
Abstract
We explore the effects of architecture and training objective choice on amortized posterior predictive inference in probabilistic conditional generative models. We aim this work to be a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large. We instead focus our attention on the case where the amount of conditioning data is small. We highlight specific architecture and objective choices that we find lead to qualitative and quantitative improvement to posterior inference in this low data regime. Specifically we explore the effects of choices of pooling operator and variational family on posterior quality in neural processes. Superior posterior predictive samples drawn from our novel neural process architectures are demonstrated via image completion/in-painting experiments.
1 INTRODUCTION
What makes a probabilistic conditional generative model good? The belief that a generative model is good if it produces samples that are indistinguishable from those that it was trained on (Hinton, 2007) is widely accepted, and understandably so. This belief also applies when the generator is conditional, though the standard becomes higher: conditional samples must be indistinguishable from training samples for each value of the condition.
Consider an amortized image in-painting task in which the objective is to fill in missing pixel values given a subset of observed pixel values. If the number and location of observed pixels is fixed, then a good conditional generative model should produce sharp-looking sample images, all of which should be compatible with the observed pixel values. If the number and location of observed pixels is allowed to vary, the same should remain true for each set of observed pixels. Recent work on this problem has focused on reconstructing an entire image from as small a conditioning set as possible. As shown in Fig. 1, state-of-the-art methods (Kim et al., 2018) achieve high-quality reconstruction from as few as 30 conditioning pixels in a 1024-pixel image.
Our work starts by questioning whether reconstructing an image from a small subset of pixels is always the right objective. To illustrate, consider the image completion task on handwritten digits. A small set of pixels might, depending on their locations, rule out the possibility that the full image is, say, 1, 5, or 6. Human-like performance in this case would generate sharp-looking sample images for all digits that are consistent with the observed pixels (i.e., 0, 2-4, and 7-9). Observing additional pixels will rule out successively more digits until the only remaining uncertainty pertains to stylistic details. The bottom-right panel of Fig. 1 demonstrates this type of “calibrated” uncertainty.
We argue that in addition to high-quality reconstruction based on large conditioning sets, amortized conditional inference methods should aim for meaningful, calibrated uncertainty, particularly for small conditioning sets. For different problems, this may mean different things (see discussion in Section 3). In this work, we focus on the image in-painting problem, and define well calibrated uncertainty to be a combination of two qualities: high sample diversity for small conditioning sets; and sharp-looking, realistic images for any size of conditioning set. As the size of the conditioning set grows, we expect the sample diversity to decrease and the quality of the images to increase. We note that this emphasis is different from the current trend in the literature, which has focused primarily on making sharp and accurate image completions when the size of the conditioning context is large (Kim et al., 2018).
To better understand and make progress toward our aim, we employ posterior predictive inference in a conditional generative latent-variable model, with a particular focus on neural processes (NPs)
(Garnelo et al., 2018a;b). We find that particular architecture choices can result in markedly different performance. In order to understand this, we investigate posterior uncertainty in NP models (Section 4), and we use our findings to establish new best practices for NP amortized inference artifacts with well-calibrated uncertainty. In particular, we demonstrate improvements arising from a combination of max pooling, a mixture variational distribution, and a “normal” amortized variational inference objective.
The rest of this paper is organized as follows. Section 2 and Section 3 present background material on amortized inference for generative models and calibrated uncertainty, respectively. Section 4 discusses and presents empirical evidence for how NP models handle uncertainty. Section 5 introduces our proposed network architecture and objective. Section 6 reports our results on the MNIST, FashionMNIST and CelebA datasets. Finally, Section 7 presents our conclusions.
2 AMORTIZED INFERENCE FOR CONDITIONAL GENERATIVE MODELS
Our work builds on amortized inference (Gershman & Goodman, 2014; Kingma & Welling, 2014), probabilistic meta-learning (Gordon et al., 2019), and conditional generative models in the form of neural processes (Garnelo et al., 2018b; Kim et al., 2018). This section provides background.
Let $(\mathbf{x}_C, \mathbf{y}_C) = \{(x_i, y_i)\}_{i=1}^{n}$ and $(\mathbf{x}_T, \mathbf{y}_T) = \{(x'_j, y'_j)\}_{j=1}^{m}$ be a context set and target set, respectively. In image in-painting, the context set input $\mathbf{x}_C$ is a subset of an image's pixel coordinates, the context set output $\mathbf{y}_C$ are the corresponding pixel values (greyscale intensity or colors), the target set input $\mathbf{x}_T$ is a set of pixel coordinates requiring in-painting, and the target set output $\mathbf{y}_T$ is the corresponding set of target pixel values. The corresponding graphical model is shown in Fig. 2.
The goal of amortized conditional inference is to rapidly approximate, at “test time,” the posterior predictive distribution
$$p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, \mathbf{x}_C, \mathbf{y}_C) = \int p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z)\, p_\theta(z \mid \mathbf{x}_C, \mathbf{y}_C)\, dz. \qquad (1)$$
We can think of the latent variable z as representing a problem-specific task encoding. The likelihood term $p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z)$ shows that the encoding parameterizes a regression model linking the target inputs to the target outputs. In the NP perspective, z is a function, and Eq. (1) can be seen as integrating over the regression function itself, as in Gaussian process regression (Rasmussen, 2003).
Variational inference There are two fundamental aims for amortized inference for conditional generative models: learning the model, parameterized by θ, that produces good samples, and producing an amortization artifact, parameterized by φ, that can be used to approximately solve Eq. (1) quickly at test time. Variational inference techniques couple the two learning problems. Let y and x be task-specific output and input sets, respectively, and assume that at training time we know the values of y. We can construct the usual single-training-task evidence lower bound (ELBO) as
$$\log p_\theta(y \mid x) \ge \mathbb{E}_{z \sim q_\phi(z \mid x, y)}\!\left[ \log \frac{p_\theta(y \mid z, x)\, p_\theta(z)}{q_\phi(z \mid x, y)} \right]. \quad (2)$$
Summing over all training examples and optimizing Eq. (2) with respect to φ learns an amortized inference artifact that takes a context set and returns a task embedding; optimizing with respect to θ learns a problem-specific generative model. Optimizing both simultaneously results in an amortized inference artifact bespoke to the overall problem domain.
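To make this concrete, the following is a minimal single-task sketch of Eq. (2), assuming a diagonal-Gaussian variational family and a standard-normal prior; `encoder` and `decoder` are placeholder callables, not the specific architecture of Section 5.

```python
import torch

def gaussian_elbo(y, x, encoder, decoder):
    """One-sample estimate of the single-task ELBO in Eq. (2).

    Assumes q(z|x,y) = N(mu, diag(sigma^2)) and p(z) = N(0, I); `encoder`
    maps (x, y) -> (mu, log_sigma) and `decoder` returns log p(y|z, x).
    """
    mu, log_sigma = encoder(x, y)
    sigma = log_sigma.exp()
    z = mu + sigma * torch.randn_like(mu)  # reparameterized sample z ~ q(z|x,y)
    log_lik = decoder(z, x, y)             # one-sample estimate of E_q[log p(y|z,x)]
    # Analytic KL(q || N(0, I)) for a diagonal Gaussian.
    kl = 0.5 * (mu.pow(2) + sigma.pow(2) - 1.0 - 2.0 * log_sigma).sum()
    return log_lik - kl                    # maximize w.r.t. both phi and theta
```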
At test time the learned model and inference artifacts can be combined to perform amortized posterior predictive inference, approximating Eq. (1) with
$$p_\theta(y_T \mid x_T, x_C, y_C) \approx \int p_\theta(y_T \mid x_T, z)\, q_\phi(z \mid x_C, y_C)\, dz. \quad (3)$$
Crucially, given an input (xC, yC), sampling from this distribution is as simple as sampling a task embedding z from qφ(z|xC, yC) and then passing the sampled z to the generative model pθ(yT |xT, z) to produce samples from the conditional generative model.
Meta-learning The task-specific problem becomes a meta-learning problem when learning a regression model θ that performs well on multiple tasks with the same graphical structure, trained on data for which the target outputs {y′j} are observed as well. In training our in-painting models, following conventions in the literature (Garnelo et al., 2018a;b), tasks are simply random-size subsets of random pixel locations x and values y from training set images. This random subsetting of training images into context and target sets is what makes this a meta-learning problem, and the “encoder” qφ(z|x,y) must learn to generalize over different context set sizes, with less posterior uncertainty as the context set size grows.
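A minimal sketch of this random subsetting is below; the size bounds are illustrative rather than the paper's exact training configuration, and the convention that the context is a subset of the target set (used in Section 4) is adopted.

```python
import numpy as np

def random_task(image, rng, max_context=200):
    """Split one (H, W) grayscale image into a random context set and a target superset."""
    h, w = image.shape
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1).reshape(-1, 2)
    n = rng.integers(1, max_context + 1)      # random context size
    m = rng.integers(n, h * w + 1)            # target size, at least n
    perm = rng.permutation(h * w)
    target_idx = perm[:m]
    context_idx = target_idx[:n]              # first n target pixels double as context
    y = image.reshape(-1)
    return (coords[context_idx], y[context_idx]), (coords[target_idx], y[target_idx])

(xc, yc), (xt, yt) = random_task(np.zeros((28, 28)), np.random.default_rng(0))
```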
Neural processes Our work builds on neural processes (NPs) (Garnelo et al., 2018a;b). NPs are deep neural network conditional generative models. Multiple variants of NPs have been proposed (Garnelo et al., 2018a;b; Kim et al., 2018), and careful empirical comparisons between them appear in the literature (Grover et al., 2019; Le et al., 2018).
NPs employ an alternative training objective to Eq. (2) arising from the fact that the graphical model in Fig. 2 allows a Bayesian update on the distribution of z, conditioning on the entire context set to produce a posterior pθ(z|xC ,yC). If the generative model is in a tractable family that allows analytic updates of this kind, then the NP objective corresponds to maximizing
$$\mathbb{E}_{z \sim q_\phi(z \mid x_T, y_T)}\!\left[ \log \frac{p_\theta(y_T \mid z, x_T)\, p_\theta(z \mid x_C, y_C)}{q_\phi(z \mid x_T, y_T)} \right] \approx \mathbb{E}_{z \sim q_\phi(z \mid x_T, y_T)}\!\left[ \log \frac{p_\theta(y_T \mid z, x_T)\, q_\phi(z \mid x_C, y_C)}{q_\phi(z \mid x_T, y_T)} \right] \quad (4)$$
where replacing pθ(z|xC ,yC) with its variational approximation is typically necessary because most deep neural generative models have a computationally inaccessible posterior. This “NP objective”
can be trained end-to-end, optimizing for both φ and θ simultaneously, where the split of training data into context and target sets must vary in terms of context set size. The choice of optimizing Eq. (4) instead of Eq. (2) is largely empirical (Le et al., 2018).
3 CALIBRATED UNCERTAINTY
Quantifying and calibrating uncertainty in generative models remains an open problem, particularly in the context of amortized inference. Previous work on uncertainty calibration has focused on problems with relatively simple structure. For example, in classification and regression problems with a single dataset, prior work framed the problem as predicting a cumulative distribution function that is close to the data-generating distribution, first as a model diagnostic (Gneiting et al., 2007) and subsequently as a post-hoc adjustment to a learned predictor (Kuleshov et al., 2018). A version of the latter approach was also applied to structured prediction problems (Kuleshov & Liang, 2015).
Previous approaches are conceptually similar to our working definition of calibrated uncertainty. However, we seek calibrated uncertainty on a per-image, per-conditioning set basis, which is fundamentally different from previous settings. Quantification of all aspects of generative model performance is an area of ongoing research, with uncertainty quantification a particularly challenging problem.
4 UNCERTAINTY IN NEURAL PROCESS MODELS
In this section, we investigate how NP models handle uncertainty. A striking property of NP models is that as the size of the (random) context set increases, there is less sampling variation in target samples generated by passing z ∼ qφ(z|xC ,yC) through the decoder. The samples shown in Fig. 1 are the likelihood mean (hence a deterministic function of z), and so the reduced sampling variation can only be produced by decreased posterior uncertainty. Our experiments confirm this, as shown in Fig. 3a: posterior uncertainty (as measured by entropy) decreases for increasing context size, even beyond the maximum training context size. Such posterior contraction is a well-studied property of classical Bayesian inference and is a consequence of the inductive bias of exchangeable models. However, NP models do not have the same inductive bias explicitly built in. How do trained NP models exhibit posterior contraction without being explicitly designed to do so? How do they learn to do so during training?
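Throughout, we measure posterior uncertainty by entropy; for the diagonal-Gaussian posteriors assumed in this paper, this has the standard closed form
$$H\big(\mathcal{N}(\mu_C, \operatorname{diag}(\sigma_C^2))\big) = \frac{d}{2}\log(2\pi e) + \sum_{k=1}^{d} \log \sigma_{C,k},$$
so posterior contraction is equivalent to a shrinking σC.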
A simple hypothesis is that the network somehow transfers the context size through the pooling operation and into ρφ(sC), which uses that information to set the posterior uncertainty. That hypothesis is supported by Fig. 3b, which shows the results of training a classifier to infer the context size given
only sC. However, consider that within a randomly generated context set, some observations are more informative than others. For example, Fig. 4 shows the first {10, 50, 100} pixels of an MNIST digit 2, greedily chosen to minimize $D_{\mathrm{KL}}(q_\phi(z \mid x, y) \,\|\, q_\phi(z \mid x_C, y_C))$. If z is interpreted to represent, amongst other things, which digit the image contains, then a small subset of pixels determines which digits are possible.
It is these highly informative pixels that drive posterior contraction in a trained NP. In a random context set, the number of highly informative pixels is random. For example, a max-pooled embedding saturates with the M most highly informative context pixels, where M ≤ d, the dimension of the embedding space. On average, a random context set of size n, taken from an image with N pixels, will contain only nM/N of the informative pixels. In truth, then, Fig. 3 displays how the information content of a context depends, on average, on the size of that context. Indeed, greedily choosing context pixels results in much faster contraction (Fig. 4).
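The greedy selection behind Fig. 4 can be sketched as follows, assuming diagonal-Gaussian posteriors; `posterior` stands in for a trained encoder applied to a pixel-index subset and is a hypothetical interface, not the paper's released code.

```python
import numpy as np

def gaussian_kl(mu_p, sig_p, mu_q, sig_q):
    """KL(N(mu_p, diag sig_p^2) || N(mu_q, diag sig_q^2)), closed form."""
    return 0.5 * np.sum(
        2.0 * np.log(sig_q / sig_p) + (sig_p**2 + (mu_p - mu_q) ** 2) / sig_q**2 - 1.0
    )

def greedy_context(n_pixels, posterior, budget=10):
    """Grow a context set greedily so its posterior stays close to the full-image posterior.

    `posterior(idx)` -> (mu, sigma) of q(z | pixels idx) for a trained encoder.
    """
    mu_full, sig_full = posterior(np.arange(n_pixels))
    chosen, remaining = [], list(range(n_pixels))
    for _ in range(budget):
        best = min(remaining, key=lambda i: gaussian_kl(
            mu_full, sig_full, *posterior(np.array(chosen + [i]))))
        chosen.append(best)
        remaining.remove(best)
    return np.array(chosen)
```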
Learning to contract Posterior contraction is implicitly encouraged by the NP objective Eq. (4). It can be rewritten as
$$\mathbb{E}_{z \sim q_\phi(z \mid x_T, y_T)}\big[\log p_\theta(y_T \mid z, x_T)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x_T, y_T) \,\|\, q_\phi(z \mid x_C, y_C)\big). \quad (5)$$
The first term encourages perfect reconstruction of yT, and discourages large variations in z ∼ qφ(z|xT, yT), which would result in large variations in predictive log-likelihood. This effect is stronger for larger target sets since there are more target pixels to predict. In practice, C ⊂ T, so the first term also (indirectly) encourages posterior contraction for increasing context sizes. The second term, $D_{\mathrm{KL}}(q_\phi(z \mid x_T, y_T) \,\|\, q_\phi(z \mid x_C, y_C))$, reinforces the contraction by encouraging the context posterior to be close to the target posterior.
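For concreteness, a one-sample estimate of Eq. (5) under diagonal-Gaussian posteriors might look like the following sketch; `encoder` and `decoder` are placeholder modules.

```python
import torch

def np_objective(x_c, y_c, x_t, y_t, encoder, decoder):
    """One-sample estimate of Eq. (5): reconstruction minus KL(q_T || q_C)."""
    mu_t, sig_t = encoder(x_t, y_t)                # q(z | x_T, y_T)
    mu_c, sig_c = encoder(x_c, y_c)                # q(z | x_C, y_C)
    z = mu_t + sig_t * torch.randn_like(mu_t)      # z ~ q(z | x_T, y_T)
    recon = decoder(z, x_t, y_t)                   # log p(y_T | z, x_T)
    kl = (torch.log(sig_c / sig_t)
          + (sig_t**2 + (mu_t - mu_c) ** 2) / (2.0 * sig_c**2) - 0.5).sum()
    return recon - kl                              # maximized during training
```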
Although the objective encourages posterior contraction, the network mechanisms for achieving contraction are not immediately clear. Ultimately, the details depend on the interplay between the pixel embedding function hφ, the pooling operation ⊕, and ρφ. We focus on mean and max pooling.
Max pooling As the size of the context set increases, the max-pooled embedding $s_C = \oplus_{i=1}^{n} s_i$ is non-decreasing in n; in a trained NP model, $\|s_C\|$ will increase each time an informative pixel is added to the context set; it will continue increasing until the context embedding saturates at the full image embedding. At a high level, this property of max pooling means that the σC component of ρφ(sC) has a relatively simple task: represent a function such that the posterior entropy is a decreasing function of all dimensions of the embedding space. An empirical demonstration that ρφ achieves this can be found in the Supplementary Material.
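A toy numerical check of this monotonicity, using synthetic non-negative (e.g., post-ReLU) per-pixel embeddings rather than a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Non-negative embeddings: a coordinate-wise non-decreasing max-pooled vector
# then also has a non-decreasing norm.
s = np.maximum(rng.standard_normal((500, 64)), 0.0)   # 500 pixels, 64-dim embeddings
pooled = np.maximum.accumulate(s[rng.permutation(500)], axis=0)  # growing context
norms = np.linalg.norm(pooled, axis=1)
assert np.all(np.diff(norms) >= 0)  # saturates at the full-image embedding
```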
Mean pooling For a fixed image, as the size of a random context set increases, its mean-pooled embedding will, on average, become closer to the full image embedding. Moreover, the mean-pooled embeddings of all possible context sets of the image are contained in the convex hull formed by (a subset of) the individual pixel embeddings. The σC component of ρφ(sC), then, must approximate a function such that the posterior entropy is a convex function on the convex set formed by the individual pixel embeddings, with its minimum at or near the full image embedding. Learning such a function across the embeddings of many training images seems a much harder learning task than that required by max pooling, which may explain the better performance of max pooling relative to mean pooling in NPs (see Section 6).
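The averaging behavior is easy to verify on synthetic embeddings; the numbers below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal((500, 64))   # synthetic per-pixel embeddings
full = s.mean(axis=0)                # full-image (mean-pooled) embedding
for n in (10, 50, 200, 500):
    dists = [np.linalg.norm(s[rng.choice(500, n, replace=False)].mean(0) - full)
             for _ in range(200)]
    print(n, round(float(np.mean(dists)), 3))  # average distance shrinks as n grows
```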
Generalizing posterior contraction Remarkably, trained NP-based models generalize their posterior contraction to context and target sizes not seen during training (see Fig. 3). The discussion of posterior contraction in NPs using mean and max pooling in the previous paragraphs highlights a shared property: for both models, the pooled embeddings of all possible context sets that can be obtained from an image are in a convex set that is determined by a subset of possible context set embeddings. For max-pooling, the convex set is formed by the max-pooled embedding of the M “activation” pixels. For mean-pooling, the convex set is obtained from the convex hull of the individual pixel embeddings. Furthermore, the full image embedding in both cases is contained in the convex set. We conjecture that a sufficient condition for an NP image completion model to yield posterior contraction that generalizes to context sets of unseen size is as follows: For any image, the pooled embedding of every possible context set (which includes the full image) lies in a convex subset of the embedding space.
5 NETWORK ARCHITECTURE
The network architectures we employ for our experiments build on NPs, inspired by our findings from Section 4. We describe them in detail in this section.
Encoder The encoder qφ(z|xC ,yC) takes input observations from an i.i.d. model (see Fig. 2, plate over C), and therefore its encoding of those observations must be permutation invariant if it is to be learned efficiently. Our qφ, as in related NP work, has a permutation-invariant architecture,
$$s_i = h_\phi(x_i, y_i),\ 1 \le i \le n; \qquad s_C = \oplus_{i=1}^{n} s_i; \qquad (\mu_C, \sigma_C) = \rho_\phi(s_C); \qquad q_\phi(z \mid x_C, y_C) = \mathcal{N}(\mu_C, \sigma_C^2).$$
Here ρφ and hφ are neural networks and ⊕ is a permutation-invariant pooling operator. Fig. 5 contains diagrams of a generalization of this encoder architecture (see below). The standard NP architecture uses mean pooling; motivated by our findings in Section 4, we also employ max pooling.
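A schematic PyTorch rendering of this encoder is below; layer widths and depths are illustrative and do not reproduce the exact configuration from the appendix.

```python
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Permutation-invariant NP encoder: per-pixel h, pooling, then rho -> (mu, sigma)."""
    def __init__(self, x_dim=2, y_dim=1, s_dim=128, z_dim=64, pool="max"):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(x_dim + y_dim, s_dim), nn.ReLU(),
                               nn.Linear(s_dim, s_dim))
        self.rho = nn.Sequential(nn.Linear(s_dim, s_dim), nn.ReLU(),
                                 nn.Linear(s_dim, 2 * z_dim))
        self.pool = pool

    def forward(self, x, y):                      # x: (n, x_dim), y: (n, y_dim)
        s = self.h(torch.cat([x, y], dim=-1))     # per-pixel embeddings s_i
        s_c = s.max(dim=0).values if self.pool == "max" else s.mean(dim=0)
        mu, log_sigma = self.rho(s_c).chunk(2, dim=-1)
        return mu, log_sigma.exp()                # parameters of N(mu_C, sigma_C^2)
```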
Hierarchical Variational Inference In order to achieve better calibrated uncertainty in small context size regimes, a more flexible approximate posterior could be beneficial. Consider the MNIST experiment shown in Fig. 6. Intuitively, an encoder could learn to map from the context set to a one-dimensional discrete z value that lends support only to those digits that are compatible with the context pixel values at the given context pixel locations (xC, yC). This suggests that qφ should be flexible enough to produce a multimodal distribution over z, which can be encouraged by making qφ a mixture and corresponds to a hierarchical variational distribution (Ranganath et al., 2016;
Yin & Zhou, 2018; Sobolev & Vetrov, 2019). Specifically, the encoder structure described above, augmented with a mixture variable is
$$q_\phi(z \mid x, y) = \int q_\phi(\psi \mid x, y)\, q_\phi(z \mid \psi, x, y)\, d\psi. \quad (6)$$
This is shown in Fig. 5. For parameter learning, the ELBO in Eq. (2) is targeted. However, the hierarchical structure of the encoder makes this objective intractable. Therefore, a tractable lower bound to the ELBO is used as the objective instead. In particular, the objective is based on semi-implicit variational inference (SIVI) (see Appendix A.3).
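Only the sampling path of this hierarchical posterior is sketched below (the SIVI bound itself is deferred to Appendix A.3); `psi_net` and `z_net` are hypothetical placeholder networks returning (mu, sigma) pairs.

```python
import torch

def sample_hierarchical_z(s_c, psi_net, z_net, n_samples=1):
    """Sample z from the mixture posterior of Eq. (6): psi ~ q(psi|s), z ~ q(z|psi, s)."""
    zs = []
    for _ in range(n_samples):
        mu_psi, sig_psi = psi_net(s_c)
        psi = mu_psi + sig_psi * torch.randn_like(mu_psi)   # mixing variable
        mu_z, sig_z = z_net(torch.cat([s_c, psi], dim=-1))
        zs.append(mu_z + sig_z * torch.randn_like(mu_z))    # z given psi
    return torch.stack(zs)
```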
Decoder The deep neural network stochastic decoder in our work is standard and not a focus. Like other NP work, the data-generating conditional likelihood in our decoder is assumed to factorize in a conditionally independent way, $p_\theta(y_T \mid z, x_T) = \prod_{i=1}^{m} p_\theta(y'_i \mid z, x'_i)$, where m is the size of the target set and $x'_i$ and $y'_i$ are a target set input and output, respectively. Fig. 5b shows the decoder architecture, with the neural network gθ the link function to a per-pixel likelihood.
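A schematic decoder with a fixed-variance Gaussian per-pixel likelihood is sketched below; this is one possible instantiation, not necessarily the exact likelihood used in our experiments.

```python
import torch
import torch.nn as nn

class PixelDecoder(nn.Module):
    """Factorized decoder: log p(y_T | z, x_T) = sum_i log p(y'_i | z, x'_i)."""
    def __init__(self, z_dim=64, x_dim=2, hidden=128, sigma=0.1):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(z_dim + x_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))
        self.sigma = sigma                           # fixed observation noise (assumed)

    def log_prob(self, z, x_t, y_t):                 # x_t: (m, x_dim), y_t: (m, 1)
        z_rep = z.unsqueeze(0).expand(x_t.shape[0], -1)
        mean = self.g(torch.cat([z_rep, x_t], dim=-1))
        return torch.distributions.Normal(mean, self.sigma).log_prob(y_t).sum()
```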
6 EXPERIMENTAL EVALUATION
We follow the experimental setup of Garnelo et al. (2018b), where images are interpreted as functions that map pixel locations to color values, and image in-painting is framed as an amortized
predictive inference task where the latent image-specific regression function needs to be inferred from a small context set of provided pixel values and locations. For ease of comparison to prior work, we use the same MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015) datasets. Additionally, we run an experiment on the FashionMNIST dataset (Xiao et al., 2017). Specific architecture details for all networks are provided in Appendix A, and open-source code for all experiments will be released at the time of publication.
Qualitative Results Fig. 6 shows qualitative image in-painting results for MNIST and CelebA images. Qualitative results for FashionMNIST are shown in Appendix D. It is apparent in all three contexts that ANPs perform poorly when the context set is small, despite the superior sharpness of their reconstructions when given large context sets. The sets of digits and faces that ANPs produce are neither sharp, realistic, nor diverse. On the other hand, their predecessor, NP (with mean pooling), arguably exhibits more diversity but suffers at all context sizes in terms of realism of the images. Our NP+SIVI with max pooling approach produces results with two important characteristics: 1) the images generated with a small amount of contextual information are sharper and more realistic; and 2) there is high context-set-compatible variability across the i.i.d. samples. These qualitative results demonstrate that max pooling plus the SIVI objective results in posterior mean functions that are sharper and more appropriately diverse, except in the high context set size regime where diversity does not matter and ANP produces much sharper images. Space limitations prohibit showing large collections of samples where the qualitative differences are even more readily apparent. Appendix L contains more comprehensive examples with greater numbers of samples.
Quantitative Results Quantitatively assessing posterior predictive calibration is an open problem (Salimans et al., 2016; Heusel et al., 2017). Table 1 reports, for the different architectures we consider, predictive held-out test-data log-likelihoods averaged over 10,000 MNIST, 10,000 FashionMNIST and 19,962 CelebA test images respectively. While the reported results make it clear that max pooling improves held-out test likelihood, likelihood alone does not provide a direct measure of sample quality or diversity. It simply measures how much mass is put on each ground-truth completion. It is also important to note that in our implementation of ANP, in contrast to its original paper, the observation variance is fixed, and that is why ANP performs poorly in Table 1. An ANP model with learned observation variance outperforms all the other models in terms of held-out test likelihood. However, it has been empirically shown that learning the observation variance in NP models with a deterministic path (including ANPs) hurts the diversity of generated samples (Le et al., 2018) (see Appendix C for a detailed discussion and additional results for the ANP model with learned variance).
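These held-out log-likelihoods can be estimated by Monte Carlo over Eq. (3); a sketch, with k posterior samples and placeholder `encoder`/`decoder` modules, is:

```python
import torch

def predictive_loglik(x_c, y_c, x_t, y_t, encoder, decoder, k=100):
    """MC estimate of log p(y_T | x_T, x_C, y_C) via log-mean-exp over Eq. (3)."""
    mu, sigma = encoder(x_c, y_c)
    logs = []
    for _ in range(k):
        z = mu + sigma * torch.randn_like(mu)        # z ~ q(z | x_C, y_C)
        logs.append(decoder.log_prob(z, x_t, y_t))   # log p(y_T | z, x_T)
    return torch.logsumexp(torch.stack(logs), dim=0) - torch.log(torch.tensor(float(k)))
```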
Borrowing from the generative adversarial networks community, who have faced similar problems of how to quantitatively evaluate models via examination of the samples they generate, we
compute inception scores (Salimans et al., 2016) using conditionally generated samples for different context set sizes for all of the considered NP architectures and report them in Fig. 7. Inception score is the mutual information between the generated images and their class labels predicted by a classifier, in particular, inception network (Szegedy et al., 2016). However, since inception network is an ImageNet (Deng et al., 2009) classifier, it is known to lead to misleading inception scores when applied to other image domains (Barratt & Sharma, 2018). We therefore use trained MNIST, FashionMNIST, and CelebA classifiers in place of inception network (He et al., 2016). (See Appendix H for details.) The images used to create the results in Fig. 7 are the same as in Figs. 6 and 11. For each context set size, the reported inception scores are aggregated over 10 different randomly chosen context sets. The dark gray dashed lines are the inception scores of training samples and represent the maximum one might hope to achieve at a context set size of zero (these plots start at one).
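Given softmax outputs from such a domain-specific classifier, the score can be computed as in Salimans et al. (2016); a minimal version:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """exp(E_x[KL(p(y|x) || p(y))]) from an array of classifier softmaxes.

    `probs` has shape (num_samples, num_classes); a trained domain-specific
    classifier stands in for the Inception network, as described above.
    """
    p_y = probs.mean(axis=0, keepdims=True)          # marginal label distribution
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```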
For small context sets, an optimally calibrated model should have high uncertainty and therefore generate samples with high diversity, resulting in high inception scores as observed. As the context set grows, sample diversity should be reduced, resulting in lower scores. Here again, architectures using max pooling produce large gains in inception score in low-context size settings. Whether the addition of SIVI is helpful is less clear here (see Appendix I for a discussion on the addition of SIVI). Nonetheless, the inception score is again only correlated with the qualitative gains we observe in Fig. 6.
7 CONCLUSION
The contributions we report in this paper include suggested neural process architectures (max pooling, no deterministic path) and objectives (regular amortized inference versus the heuristic NP objective, SIVI versus non-mixture variational family) that produce qualitatively better calibrated posteriors, particularly in low context cardinality settings. We provide empirical evidence of how natural posterior contraction may be facilitated by the neural process architecture. Finally, we establish quantitative evidence that shows improvements in neural process posterior predictive performance and highlight the need for better metrics for quantitatively evaluating posterior calibration.
We remind the reader that this work, like most other deep learning work, highlights the impact of varying only a small subset of the dimensions of architecture and objective degrees of freedom. We found that, for instance, simply making ρφ deeper than that reported in the literature improved baseline results substantially. The choice of learning rate also had a large impact on the relative gap between the reported alternatives. We report what we believe to be the most robust configuration across all the configurations that we explored: max pooling and SIVI consistently improve performance.
1. What are the contributions and limitations of the paper on Neural Process (NP) models when the amount of conditioning data is limited?
2. How does the paper investigate the role of architecture and objective choices in NPs?
3. What is the significance of the paper's findings regarding calibrated uncertainty in generative modeling?
4. What are some assumptions or statements made in the paper that could benefit from further elaboration or justification?
5. How do the modifications to existing NPs, such as max-pooling and SIVI on the posterior z's, contribute to improving calibration?
6. Can the authors provide more details about how they evaluated "good calibration" and "calibrated uncertainty"?
7. How does the paper relate to prior work on calibration for classification models, such as (Murphy 1973), (Gneiting et al., 2007), and (Kuleshov et al., 2018)?
8. Could the authors clarify their response to the comment in Section 4 about posterior contraction and its relation to exchangeability via the iid latent variable z?
9. Why does sample diversity decrease as the size of the conditioning set grows, intuitively?
10. Shouldn't the arrow going from z -> y_i in Figure 2 be reversed?
Review
Summary: This paper is an empirical investigation into the role of architecture and objective choices in Neural Process (NP) models when the amount of conditioning data is limited. Specifically, they investigate the question of well-calibrated uncertainty.
Clarity: The overall quality of the writing is clear, but missing details/explanations leave some claims unjustified and some lines of argument difficult to follow.
Originality: Limited -- the paper is mostly an empirical investigation, with the modifications to existing NPs being (a) max-pooling and (b) SIVI on the posterior z’s.
Significance: Seems limited (see more on “Cons” section and “Questions/comments”), though I am wondering whether the authors could have done more to emphasize the main contributions of this work. The paper’s significance for me has been hampered by a lack of details and of clear takeaways/implications of the results.
Pros: There’s been a lot of work on NPs and their variants, but less so on empirically investigating how and when they work well. There is also a range of systematic empirical evaluations both in the main text and supplement, which I appreciated.
Cons: There are some assumptions/statements that it would be beneficial to elaborate upon in the paper. First, calibrated uncertainty is defined to be “high sample diversity for small conditioning sets; and sharp-looking, realistic images for any size of conditioning set.” This statement, introduced early on in the paper without much justification, is a point that the authors repeatedly return to, and I found myself wondering why (especially since NPs are built for prediction/regression, so a lot of the prior work on calibration for classification models should hold here, such as (Murphy, 1973), (Gneiting et al., 2007), (Kuleshov et al., 2018)). What is the benefit of reasoning about calibration in the generative modeling case (e.g. with Inception Scores)?
Additionally, why should a more flexible approximate posterior be more beneficial for better calibrated uncertainty? (Is this a log-likelihood argument, since the log-likelihood decomposes to calibration + “sharpness”?) More broadly, my biggest problem was that the paper makes claims about improving calibration (e.g. samples are obtained from “well calibrated posteriors” in Figure 6) without formally defining how to evaluate “good calibration,” what it means to have “calibrated uncertainty,” etc. I would appreciate if the authors could clear up any misunderstandings I may have had about the work.
Questions/comments:
Heusel et al. (2017) show that the inception score (IS) is not a valid metric for CelebA -- would the authors report FID scores instead?
Regarding the comment in Section 4 about posterior contraction: aren’t NPs exchangeable by design via the iid latent variable z (Korshunova et al., 2019, among many others)? I thought that invoking (conditional) de Finetti via the iid latent variable is what allows NPs to model (exchangeable) stochastic processes.
It would be helpful to formally define “posterior contraction” for the reader -- is it referring to a reduction in posterior uncertainty?
Intuitively, why would sample diversity decrease as the size of the conditioning set grows? For example, if I have a dataset of 10 examples of only cats and dogs and increase its size to 1000 (which say, also includes examples of sheep), shouldn’t that also increase my sample diversity as well?
Shouldn’t the arrow going from z -> y_i in Figure 2 be reversed?
UPDATE: I have read the authors' rebuttal and revised draft and have raised my score to a 5.
1. What is the main contribution of the paper regarding neural processes?
2. What are the strengths of the proposed modifications, particularly in the experiments section?
3. Do you have concerns about the experiments comparing SIVI+max pooling with NP+max pooling?
4. How does the reviewer assess the choice of using SIVI in the paper?
5. What are the suggestions provided by the reviewer for improving the paper?
Review
The paper aims at increasing the sample diversity of neural processes when the conditioning set is small, while maintaining visual fidelity. The low-data regime is arguably where neural processes are most interesting, and in that regard the paper is right to turn to this setting. The discussion on how different aggregation functions affect the predictive uncertainty of the neural process is also appreciated, as is the experiment on regressing the size of the conditioning set based on the latent embedding.
Unfortunately, the experiments section does not paint a clear enough picture. While the experiments show us that the proposed modifications have some benefits, it is not so clear how much each part contributes. Especially the contribution of the SIVI bound is hard to judge. As it stands the paper feels a bit incomplete in this regard. For that reason I cannot recommend accepting the paper at this stage, though I am willing to revise my score based on the authors' response. Specifically, I'd appreciate if the paper could make a clear case for adopting the hierarchical latent variable structure and the SIVI bound, as these add complexity to the method (while the max-aggregator does not).
Pros
The paper deals with a relevant issue. Neural processes are most interesting when the condition set is small, and this scenario has so far been largely ignored.
The discussion on the choice of aggregator is useful, as are the experiments on the variational posterior entropy and the prediction of the context set size.
Cons
It is unclear how much each modification (SIVI and max-pooling) contributes. The experimental results compare SIVI+max pooling with NP+max pooling, but SIVI+mean is omitted. It's also notable that NP+max seems to work better than SIVI+max on the CelebA dataset. Some discussion would be helpful here, as I don't see any reason why a hierarchical latent structure should hurt in any case, barring optimization difficulties.
I am not familiar with SIVI and I don't expect the average reader to be either. I'd appreciate some discussion on the choice of using it.
Other comments
The inception score sounds like the mutual information between the class label and the generated image. I expect that stating this would help some readers.
Perhaps it would help to look at each component in isolation and in different settings, e.g. outside the image domain. I can see how max-pooling might be good for images, while other aggregation methods might have an edge in, e.g. a dataset of robot joint trajectories. Other people have investigated the choice of aggregation method, and reading this work reminded me of work by Soelch et al., which might be interesting to the authors.
ICLR | Title
Uncertainty in Neural Processes
Abstract
We explore the effects of architecture and training objective choice on amortized posterior predictive inference in probabilistic conditional generative models. We aim this work to be a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large. We instead focus our attention on the case where the amount of conditioning data is small. We highlight specific architecture and objective choices that we find lead to qualitative and quantitative improvement to posterior inference in this low data regime. Specifically we explore the effects of choices of pooling operator and variational family on posterior quality in neural processes. Superior posterior predictive samples drawn from our novel neural process architectures are demonstrated via image completion/in-painting experiments.
1 INTRODUCTION
What makes a probabilistic conditional generative model good? The belief that a generative model is good if it produces samples that are indistinguishable from those that it was trained on (Hinton, 2007) is widely accepted, and understandably so. This belief also applies when the generator is conditional, though the standard becomes higher: conditional samples must be indistinguishable from training samples for each value of the condition.
Consider an amortized image in-painting task in which the objective is to fill in missing pixel values given a subset of observed pixel values. If the number and location of observed pixels is fixed, then a good conditional generative model should produce sharp-looking sample images, all of which should be compatible with the observed pixel values. If the number and location of observed pixels is allowed to vary, the same should remain true for each set of observed pixels. Recent work on this problem has focused on reconstructing an entire image from as small a conditioning set as possible. As shown in Fig. 1, state-of-the-art methods (Kim et al., 2018) achieve high-quality reconstruction from as few as 30 conditioning pixels in a 1024-pixel image.
Our work starts by questioning whether reconstructing an image from a small subset of pixels is always the right objective. To illustrate, consider the image completion task on handwritten digits. A small set of pixels might, depending on their locations, rule out the possibility that the full image is, say, 1, 5, or 6. Human-like performance in this case would generate sharp-looking sample images for all digits that are consistent with the observed pixels (i.e., 0, 2-4, and 7-9). Observing additional pixels will rule out successively more digits until the only remaining uncertainty pertains to stylistic details. The bottom-right panel of Fig. 1 demonstrates this type of “calibrated” uncertainty.
We argue that in addition to high-quality reconstruction based on large conditioning sets, amortized conditional inference methods should aim for meaningful, calibrated uncertainty, particularly for small conditioning sets. For different problems, this may mean different things (see discussion in Section 3). In this work, we focus on the image in-painting problem, and define well calibrated uncertainty to be a combination of two qualities: high sample diversity for small conditioning sets; and sharp-looking, realistic images for any size of conditioning set. As the size of the conditioning set grows, we expect the sample diversity to decrease and the quality of the images to increase. We note that this emphasis is different from the current trend in the literature, which has focused primarily on making sharp and accurate image completions when the size of the conditioning context is large (Kim et al., 2018).
To better understand and make progress toward our aim, we employ posterior predictive inference in a conditional generative latent-variable model, with a particular focus on neural processes (NPs)
(Garnelo et al., 2018a;b). We find that particular architecture choices can result in markedly different performance. In order to understand this, we investigate posterior uncertainty in NP models (Section 4), and we use our findings to establish new best practices for NP amortized inference artifacts with well-calibrated uncertainty. In particular, we demonstrate improvements arising from a combination of max pooling, a mixture variational distribution, and a “normal” amortized variational inference objective.
The rest of this paper is organized as follows. Section 2 and Section 3 present background material on amortized inference for generative models and calibrated uncertainty, respectively. Section 4 discusses and presents empirical evidence for how NP models handle uncertainty. Section 5 introduces our proposed network architecture and objective. Section 6 reports our results on the MNIST, FashionMNIST and CelebA datasets. Finally, Section 7 presents our conclusions.
2 AMORTIZED INFERENCE FOR CONDITIONAL GENERATIVE MODELS
Our work builds on amortized inference (Gershman & Goodman, 2014; Kingma & Welling, 2014), probabilistic meta-learning (Gordon et al., 2019), and conditional generative models in the form of neural processes (Garnelo et al., 2018b; Kim et al., 2018). This section provides background.
Let $(\mathbf{x}_C, \mathbf{y}_C) = \{(x_i, y_i)\}_{i=1}^{n}$ and $(\mathbf{x}_T, \mathbf{y}_T) = \{(x'_j, y'_j)\}_{j=1}^{m}$ be a context set and target set respectively. In image in-painting, the context set input $\mathbf{x}_C$ is a subset of an image’s pixel coordinates, the context set output $\mathbf{y}_C$ are the corresponding pixel values (greyscale intensity or colors), the target set input $\mathbf{x}_T$ is a set of pixel coordinates requiring in-painting, and the target set output $\mathbf{y}_T$ is the corresponding set of target pixel values. The corresponding graphical model is shown in Fig. 2.
The goal of amortized conditional inference is to rapidly approximate, at “test time,” the posterior predictive distribution
$$p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, \mathbf{x}_C, \mathbf{y}_C) = \int p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z)\, p_\theta(z \mid \mathbf{x}_C, \mathbf{y}_C)\, dz \, . \quad (1)$$
We can think of the latent variable $z$ as representing a problem-specific task-encoding. The likelihood term $p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z)$ shows that the encoding parameterizes a regression model linking the target inputs to the target outputs. In the NP perspective, $z$ is a function and Eq. (1) can be seen as integrating over the regression function itself, as in Gaussian process regression (Rasmussen, 2003).
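Although not spelled out in the text, the working assumption throughout is that once a tractable distribution over $z$ is available, the integral in Eq. (1) is estimated by simple Monte Carlo; the following standard estimator (our notation, with $S$ the number of draws) makes this explicit:

$$p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, \mathbf{x}_C, \mathbf{y}_C) \;\approx\; \frac{1}{S} \sum_{s=1}^{S} p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z^{(s)}), \qquad z^{(s)} \sim p_\theta(z \mid \mathbf{x}_C, \mathbf{y}_C) \, .$$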
Variational inference There are two fundamental aims for amortized inference for conditional generative models: learning the model, parameterized by θ, that produces good samples, and producing an amortization artifact, parameterized by φ, that can be used to approximately solve Eq. (1) quickly at test time. Variational inference techniques couple the two learning problems. Let y and x be task-specific output and input sets, respectively, and assume that at training time we know the values of y. We can construct the usual single-training-task evidence lower bound (ELBO) as
$$\log p_\theta(\mathbf{y} \mid \mathbf{x}) \;\ge\; \mathbb{E}_{z \sim q_\phi(z \mid \mathbf{x}, \mathbf{y})}\!\left[ \log \frac{p_\theta(\mathbf{y} \mid z, \mathbf{x})\, p_\theta(z)}{q_\phi(z \mid \mathbf{x}, \mathbf{y})} \right] . \quad (2)$$
Summing over all training examples and optimizing Eq. (2) with respect to φ learns an amortized inference artifact that takes a context set and returns a task embedding; optimizing with respect to θ learns a problem-specific generative model. Optimizing both simultaneously results in an amortized inference artifact bespoke to the overall problem domain.
At test time the learned model and inference artifacts can be combined to perform amortized posterior predictive inference, approximating Eq. (1) with
$$p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, \mathbf{x}_C, \mathbf{y}_C) \approx \int p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z)\, q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C)\, dz \, . \quad (3)$$
Crucially, given an input $(\mathbf{x}_C, \mathbf{y}_C)$, sampling from this distribution is as simple as sampling a task embedding $z$ from $q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C)$ and then passing the sampled $z$ to the generative model $p_\theta(\mathbf{y}_T \mid \mathbf{x}_T, z)$ to produce samples from the conditional generative model; a minimal code sketch of this procedure follows the next paragraph.

Meta-learning The task-specific problem becomes a meta-learning problem when learning a regression model $\theta$ that performs well on multiple tasks with the same graphical structure, trained on data for which the target outputs $\{y'_j\}$ are observed as well. In training our in-painting models, following conventions in the literature (Garnelo et al., 2018a;b), tasks are simply random-size subsets of random pixel locations $\mathbf{x}$ and values $\mathbf{y}$ from training set images. This random subsetting of training images into context and target sets transforms this into a meta-learning problem, and the “encoder” $q_\phi(z \mid \mathbf{x}, \mathbf{y})$ must learn to generalize over different context set sizes, with less posterior uncertainty as the context set size grows.
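The sketch below illustrates the test-time sampling procedure described above. It is a minimal illustration assuming hypothetical encoder and decoder networks with the interfaces described in Section 5, not the authors' implementation.

import torch

def sample_posterior_predictive(encoder, decoder, x_c, y_c, x_t, num_samples=16):
    """Approximate Eq. (3) by Monte Carlo: draw task embeddings z from the
    amortized posterior q_phi(z | x_C, y_C), then decode each one."""
    mu, sigma = encoder(x_c, y_c)               # parameters of q_phi(z | x_C, y_C)
    q = torch.distributions.Normal(mu, sigma)   # diagonal Gaussian posterior
    samples = []
    for _ in range(num_samples):
        z = q.rsample()                         # one task embedding per draw
        samples.append(decoder(z, x_t))         # predictive means for target pixels
    return torch.stack(samples)                 # [num_samples, m, channels]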
Neural processes Our work builds on neural processes (NPs) (Garnelo et al., 2018a;b). NPs are deep neural network conditional generative models. Multiple variants of NPs have been proposed (Garnelo et al., 2018a;b; Kim et al., 2018), and careful empirical comparisons between them appear in the literature (Grover et al., 2019; Le et al., 2018).
NPs employ an alternative training objective to Eq. (2) arising from the fact that the graphical model in Fig. 2 allows a Bayesian update on the distribution of z, conditioning on the entire context set to produce a posterior pθ(z|xC ,yC). If the generative model is in a tractable family that allows analytic updates of this kind, then the NP objective corresponds to maximizing
$$\mathbb{E}_{z \sim q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T)}\!\left[ \log \frac{p_\theta(\mathbf{y}_T \mid z, \mathbf{x}_T)\, p_\theta(z \mid \mathbf{x}_C, \mathbf{y}_C)}{q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T)} \right] \;\approx\; \mathbb{E}_{z \sim q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T)}\!\left[ \log \frac{p_\theta(\mathbf{y}_T \mid z, \mathbf{x}_T)\, q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C)}{q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T)} \right] \quad (4)$$
where replacing $p_\theta(z \mid \mathbf{x}_C, \mathbf{y}_C)$ with its variational approximation is typically necessary because most deep neural generative models have a computationally inaccessible posterior. This “NP objective”
can be trained end-to-end, optimizing for both φ and θ simultaneously, where the split of training data into context and target sets must vary in terms of context set size. The choice of optimizing Eq. (4) instead of Eq. (2) is largely empirical (Le et al., 2018).
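To make the objective concrete, the following is a minimal sketch of a one-sample estimate of the (negated) NP objective in Eqs. (4)/(5), assuming diagonal-Gaussian variational posteriors so the KL term is analytic. The encoder/decoder names and interfaces are illustrative assumptions, not the authors' code.

import torch
from torch.distributions import Normal, kl_divergence

def np_loss(encoder, decoder, x_c, y_c, x_t, y_t):
    """One-sample estimate of the (negated) NP objective, Eq. (4)/(5)."""
    mu_t, sigma_t = encoder(x_t, y_t)           # target posterior q_phi(z | x_T, y_T)
    mu_c, sigma_c = encoder(x_c, y_c)           # context posterior q_phi(z | x_C, y_C)
    q_t, q_c = Normal(mu_t, sigma_t), Normal(mu_c, sigma_c)
    z = q_t.rsample()                           # reparameterized sample
    y_mu, y_sigma = decoder(z, x_t)             # per-pixel likelihood parameters
    recon = Normal(y_mu, y_sigma).log_prob(y_t).sum()
    kl = kl_divergence(q_t, q_c).sum()          # analytic for diagonal Gaussians
    return -(recon - kl)                        # minimize the negative objective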
3 CALIBRATED UNCERTAINTY
Quantifying and calibrating uncertainty in generative models remains an open problem, particularly in the context of amortized inference. Previous work on uncertainty calibration has focused on problems with relatively simpler structure. For example, in classification and regression problems with a single dataset, prior work framed the problem as predicting a cumulative distribution function that is close to the data-generating distribution, first as a model diagnostic (Gneiting et al., 2007) and subsequently as a post-hoc adjustment to a learned predictor (Kuleshov et al., 2018). A version of the latter approach was also applied to structured prediction problems (Kuleshov & Liang, 2015).
Previous approaches are conceptually similar to our working definition of calibrated uncertainty. However, we seek calibrated uncertainty on a per-image, per-conditioning set basis, which is fundamentally different from previous settings. Quantification of all aspects of generative model performance is an area of ongoing research, with uncertainty quantification a particularly challenging problem.
4 UNCERTAINTY IN NEURAL PROCESS MODELS
In this section, we investigate how NP models handle uncertainty. A striking property of NP models is that as the size of the (random) context set increases, there is less sampling variation in target samples generated by passing z ∼ qφ(z|xC ,yC) through the decoder. The samples shown in Fig. 1 are the likelihood mean (hence a deterministic function of z), and so the reduced sampling variation can only be produced by decreased posterior uncertainty. Our experiments confirm this, as shown in Fig. 3a: posterior uncertainty (as measured by entropy) decreases for increasing context size, even beyond the maximum training context size. Such posterior contraction is a well-studied property of classical Bayesian inference and is a consequence of the inductive bias of exchangeable models. However, NP models do not have the same inductive bias explicitly built in. How do trained NP models exhibit posterior contraction without being explicitly designed to do so? How do they learn to do so during training?
A simple hypothesis is that the network somehow transfers the context size through the pooling operation and into ρφ(sC), which uses that information to set the posterior uncertainty. That hypothesis is supported by Fig. 3b, which shows the results of training a classifier to infer the context size given
only $s_C$. However, consider that within a randomly generated context set, some observations are more informative than others. For example, Fig. 4 shows the first {10, 50, 100} pixels of an MNIST digit 2, greedily chosen to minimize $D_{KL}(q_\phi(z \mid \mathbf{x}, \mathbf{y}) \,\|\, q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C))$. If $z$ is interpreted to represent, amongst other things, which digit the image contains, then a small subset of pixels determines which digits are possible.
It is these highly informative pixels that drive posterior contraction in a trained NP. In a random context set, the number of highly informative pixels is random. For example, a max-pooled embedding saturates with the $M$ most highly informative context pixels, where $M \le d$, the dimension of the embedding space. On average, a random context set of size $n$, taken from an image with $N$ pixels, will contain only $nM/N$ of the informative pixels. In truth, Fig. 3 displays how the information content of a context depends, on average, on the size of that context. Indeed, greedily choosing context pixels results in much faster contraction (Fig. 4).
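The greedy selection used to produce Fig. 4 can be sketched as follows. This is our reading of the procedure, assuming a hypothetical posterior(x, y) helper that returns the encoder's Gaussian for a given pixel subset; it is quadratic in the budget and meant only as illustration.

import torch
from torch.distributions import kl_divergence

def greedy_context(posterior, x, y, budget):
    """Greedily grow a context set that best matches the full-image posterior,
    i.e. minimizes D_KL(q(z | x, y) || q(z | x_C, y_C)) at each step."""
    q_full = posterior(x, y)
    chosen = []
    remaining = list(range(len(x)))
    for _ in range(budget):
        def score(i):
            idx = chosen + [i]
            return kl_divergence(q_full, posterior(x[idx], y[idx])).sum()
        best = min(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen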
Learning to contract Posterior contraction is implicitly encouraged by the NP objective Eq. (4). It can be rewritten as
$$\mathbb{E}_{z \sim q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T)}\!\left[ \log p_\theta(\mathbf{y}_T \mid z, \mathbf{x}_T) \right] \;-\; D_{KL}\!\left( q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T) \,\|\, q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C) \right) . \quad (5)$$
The first term encourages perfect reconstruction of $\mathbf{y}_T$, and discourages large variations in $z \sim q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T)$, which would result in large variations in predictive log-likelihood. This effect is stronger for larger target sets since there are more target pixels to predict. In practice, $C \subset T$, so the first term also (indirectly) encourages posterior contraction for increasing context sizes. The second term, $D_{KL}(q_\phi(z \mid \mathbf{x}_T, \mathbf{y}_T) \,\|\, q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C))$, reinforces the contraction by encouraging the context posterior to be close to the target posterior.
Although the objective encourages posterior contraction, the network mechanisms for achieving contraction are not immediately clear. Ultimately, the details depend on the interplay between the pixel embedding function $h_\phi$, the pooling operation $\oplus$, and $\rho_\phi$. We focus on mean and max pooling.

Max pooling As the size of the context set increases, the max-pooled embedding $s_C = \oplus_{i=1}^{n} s_i$ is non-decreasing in $n$; in a trained NP model, $\|s_C\|$ will increase each time an informative pixel is added to the context set; it will continue increasing until the context embedding saturates at the full image embedding. At a high level, this property of max pooling means that the $\sigma_C$ component of $\rho_\phi(s_C)$ has a relatively simple task: represent a function such that the posterior entropy is a decreasing function of all dimensions of the embedding space. An empirical demonstration that $\rho_\phi$ achieves this can be found in the Supplementary Material.
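A tiny self-contained check (ours, not from the paper) of the coordinate-wise non-decreasing property of max pooling over nested context sets:

import torch

torch.manual_seed(0)
s = torch.randn(100, 8)                 # hypothetical per-pixel embeddings
prev = torch.full((8,), float('-inf'))
for n in (10, 30, 50, 100):             # growing (nested) context sets
    pooled = s[:n].max(dim=0).values    # max-pooled context embedding
    assert torch.all(pooled >= prev)    # coordinate-wise non-decreasing
    prev = pooled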
Mean pooling For a fixed image, as the size of a random context set increases, its mean-pooled embedding will, on average, become closer to the full image embedding. Moreover, the mean-pooled embeddings of all possible context sets of the image are contained in the convex hull formed by (a subset of) the individual pixel embeddings. The $\sigma_C$ component of $\rho_\phi(s_C)$, then, must approximate a function such that the posterior entropy is a convex function on this convex hull, with its minimum at or near the full image embedding. Learning such a function across the embeddings of many training images seems a much harder learning task than that required by max pooling, which may explain the better performance of max pooling relative to mean pooling in NPs (see Section 6).
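An analogous toy check for mean pooling, illustrating (for one random set of hypothetical pixel embeddings) that mean-pooled context embeddings tend toward the full-image embedding as the context grows; sizes and dimensions are arbitrary:

import torch

torch.manual_seed(0)
s = torch.randn(100, 8)                  # hypothetical per-pixel embeddings
full = s.mean(dim=0)                     # full-image embedding
for n in (10, 30, 100):
    idx = torch.randperm(100)[:n]        # random context set of size n
    dist = (s[idx].mean(dim=0) - full).norm()
    print(n, float(dist))                # shrinks (on average) as n grows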
Generalizing posterior contraction Remarkably, trained NP-based models generalize their posterior contraction to context and target sizes not seen during training (see Fig. 3). The discussion of posterior contraction in NPs using mean and max pooling in the previous paragraphs highlights a shared property: for both models, the pooled embeddings of all possible context sets that can be obtained from an image are in a convex set that is determined by a subset of possible context set embeddings. For max-pooling, the convex set is formed by the max-pooled embedding of the M “activation” pixels. For mean-pooling, the convex set is obtained from the convex hull of the individual pixel embeddings. Furthermore, the full image embedding in both cases is contained in the convex set. We conjecture that a sufficient condition for an NP image completion model to yield posterior contraction that generalizes to context sets of unseen size is as follows: For any image, the pooled embedding of every possible context set (which includes the full image) lies in a convex subset of the embedding space.
5 NETWORK ARCHITECTURE
The network architectures we employ for our experiments build on NPs, inspired by our findings from Section 4. We describe them in detail in this section.
Encoder The encoder qφ(z|xC ,yC) takes input observations from an i.i.d. model (see Fig. 2, plate over C), and therefore its encoding of those observations must be permutation invariant if it is to be learned efficiently. Our qφ, as in related NP work, has a permutation-invariant architecture,
$$s_i = h_\phi(x_i, y_i), \; 1 \le i \le n; \qquad s_C = \oplus_{i=1}^{n} s_i; \qquad (\mu_C, \sigma_C) = \rho_\phi(s_C); \qquad q_\phi(z \mid \mathbf{x}_C, \mathbf{y}_C) = \mathcal{N}(\mu_C, \sigma_C^2) \, .$$
Here $\rho_\phi$ and $h_\phi$ are neural networks and $\oplus$ is a permutation-invariant pooling operator. Fig. 5 contains diagrams of a generalization of this encoder architecture (see below). The standard NP architecture uses mean pooling; motivated by our findings in Section 4, we also employ max pooling.
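A compact PyTorch sketch of this encoder is given below. Layer sizes and depths are placeholder assumptions; the actual configurations are in Appendix A.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NPEncoder(nn.Module):
    """Permutation-invariant encoder: s_i = h(x_i, y_i), pooled to s_C,
    then rho maps s_C to the mean and scale of q(z | x_C, y_C)."""
    def __init__(self, x_dim, y_dim, s_dim, z_dim, pool="max"):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(x_dim + y_dim, s_dim), nn.ReLU(),
                               nn.Linear(s_dim, s_dim))
        self.rho = nn.Sequential(nn.Linear(s_dim, s_dim), nn.ReLU(),
                                 nn.Linear(s_dim, 2 * z_dim))
        self.pool = pool

    def forward(self, x, y):             # x: [n, x_dim], y: [n, y_dim]
        s = self.h(torch.cat([x, y], dim=-1))
        s_c = s.max(dim=0).values if self.pool == "max" else s.mean(dim=0)
        mu, pre_sigma = self.rho(s_c).chunk(2, dim=-1)
        return mu, F.softplus(pre_sigma) + 1e-4   # positive scale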
Hierarchical Variational Inference In order to achieve better calibrated uncertainty in small context size regimes, a more flexible approximate posterior could be beneficial. Consider the MNIST experiment shown in Fig. 6. Intuitively, an encoder could learn to map from the context set to a one-dimensional discrete $z$ value that lends support only to those digits that are compatible with the context pixel values at the given context pixel locations $(\mathbf{x}_C, \mathbf{y}_C)$. This suggests that $q_\phi$ should be flexible enough to produce a multimodal distribution over $z$, which can be encouraged by making $q_\phi$ a mixture and corresponds to a hierarchical variational distribution (Ranganath et al., 2016;
Yin & Zhou, 2018; Sobolev & Vetrov, 2019). Specifically, the encoder structure described above, augmented with a mixture variable, is
$$q_\phi(z \mid \mathbf{x}, \mathbf{y}) = \int q_\phi(\psi \mid \mathbf{x}, \mathbf{y})\, q_\phi(z \mid \psi, \mathbf{x}, \mathbf{y})\, d\psi \, . \quad (6)$$
This is shown in Fig. 5. For parameter learning, the ELBO in Eq. (2) is targeted. However, the hierarchical structure of the encoder makes this objective intractable. Therefore, a tractable lower bound to the ELBO is used as the objective instead. In particular, the objective is based on semi-implicit variational inference (SIVI) (see Appendix A.3).
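Sampling from the hierarchical variational distribution in Eq. (6) amounts to drawing the mixing variable first and then the conditional Gaussian. A minimal sketch, with hypothetical psi_net and gauss_net networks (the SIVI bound itself, detailed in Appendix A.3, is not reproduced here):

import torch
import torch.nn.functional as F

def sample_hierarchical_q(psi_net, gauss_net, s_c):
    """Draw z ~ q(z | x_C, y_C) for the mixture encoder of Eq. (6): psi is
    sampled implicitly by injecting noise, then z is drawn from a Gaussian
    conditioned on psi and the pooled context embedding s_C."""
    eps = torch.randn_like(s_c)                   # mixing noise
    psi = psi_net(torch.cat([s_c, eps], dim=-1))  # sample of the mixture variable
    mu, pre_sigma = gauss_net(torch.cat([s_c, psi], dim=-1)).chunk(2, dim=-1)
    return mu + F.softplus(pre_sigma) * torch.randn_like(mu)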
Decoder The deep neural network stochastic decoder in our work is standard and not a focus. Like other NP work, the data generating conditional likelihood in our decoder is assumed to factorize in a conditionally independent way, $p_\theta(\mathbf{y}_T \mid z, \mathbf{x}_T) = \prod_{i=1}^{m} p_\theta(y'_i \mid z, x'_i)$, where $m$ is the size of the target set and $x'_i$ and $y'_i$ are a target set input and output respectively. Fig. 5b shows the decoder architecture, with the neural network $g_\theta$ the link function to a per-pixel likelihood.
6 EXPERIMENTAL EVALUATION
We follow the experimental setup of Garnelo et al. (2018b), where images are interpreted as functions that map pixel locations to color values, and image in-painting is framed as an amortized
predictive inference task where the latent image-specific regression function needs to be inferred from a small context set of provided pixel values and locations. For ease of comparison to prior work, we use the same MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015) datasets. Additionally, we run an experiment on the FashionMNIST dataset (Xiao et al., 2017). Specific architecture details for all networks are provided in Appendix A, and open-source code for all experiments will be released at the time of publication.
Qualitative Results Fig. 6 shows qualitative image in-painting results for MNIST and CelebA images. Qualitative results for FashionMNIST are shown in Appendix D. It is apparent in all three contexts that attentive NPs (ANPs; Kim et al., 2018) perform poorly when the context set is small, despite the superior sharpness of their reconstructions when given large context sets. The sets of digits and faces that ANPs produce are neither sharp, realistic, nor diverse. On the other hand, their predecessor, NP (with mean pooling), arguably exhibits more diversity but suffers at all context sizes in terms of the realism of the images. Our NP+SIVI with max pooling approach produces results with two important characteristics: 1) the images generated with a small amount of contextual information are sharper and more realistic; and 2) there is high context-set-compatible variability across the i.i.d. samples. These qualitative results demonstrate that max pooling plus the SIVI objective result in posterior mean functions that are sharper and more appropriately diverse, except in the high context set size regime, where diversity does not matter and ANP produces much sharper images. Space limitations prohibit showing large collections of samples, where the qualitative differences are even more readily apparent. Appendix L contains more comprehensive examples with greater numbers of samples.
Quantitative Results Quantitatively assessing posterior predictive calibration is an open problem (Salimans et al., 2016; Heusel et al., 2017). Table 1 reports, for the different architectures we consider, held-out test-data predictive log-likelihoods averaged over 10,000 MNIST, 10,000 FashionMNIST and 19,962 CelebA test images, respectively. While the reported results make it clear that max pooling improves held-out test likelihood, likelihood alone provides a direct measure of neither sample quality nor diversity; it simply measures how much mass is put on each ground-truth completion. It is also important to note that in our implementation of ANP, in contrast to its original paper, the observation variance is fixed, which is why ANP performs poorly in Table 1. An ANP model with learned observation variance outperforms all the other models in terms of held-out test likelihood. However, it has been shown empirically that learning the observation variance in NP models with a deterministic path (including ANPs) hurts the diversity of generated samples (Le et al., 2018) (see Appendix C for a detailed discussion and additional results for the ANP model with learned variance).
Borrowing from the generative adversarial networks community, who have faced similar problems of how to quantitatively evaluate models via examination of the samples they generate, we
compute inception scores (Salimans et al., 2016) using conditionally generated samples for different context set sizes for all of the considered NP architectures and report them in Fig. 7. Inception score is the exponentiated mutual information between generated images and their class labels as predicted by a classifier, in particular, the Inception network (Szegedy et al., 2016). However, since the Inception network is an ImageNet (Deng et al., 2009) classifier, it is known to produce misleading inception scores when applied to other image domains (Barratt & Sharma, 2018). We therefore use trained MNIST, FashionMNIST, and CelebA classifiers in place of the Inception network (He et al., 2016). (See Appendix H for details.) The images used to create the results in Fig. 7 are the same as in Figs. 6 and 11. For each context set size, the reported inception scores are aggregated over 10 different randomly chosen context sets. The dark gray dashed lines are the inception scores of training samples and represent the maximum one might hope to achieve at a context set size of zero (these plots start at one).
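For reference, the inception score can be computed from any classifier's predictive distribution as follows. This sketch follows the standard definition from Salimans et al. (2016); the classifier argument stands in for the domain-specific classifiers used here.

import torch

def inception_score(classifier, images):
    """Salimans et al. (2016): IS = exp( E_x[ KL(p(y|x) || p(y)) ] ),
    computed here with a domain-specific classifier instead of Inception."""
    with torch.no_grad():
        p_yx = torch.softmax(classifier(images), dim=1)   # p(y | x), [N, K]
    p_y = p_yx.mean(dim=0, keepdim=True)                  # marginal p(y)
    kl = (p_yx * (p_yx.log() - p_y.log())).sum(dim=1)     # per-image KL
    return kl.mean().exp().item()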
For small context sets, an optimally calibrated model should have high uncertainty and therefore generate samples with high diversity, resulting in high inception scores as observed. As the context set grows, sample diversity should be reduced, resulting in lower scores. Here again, architectures using max pooling produce large gains in inception score in low-context size settings. Whether the addition of SIVI is helpful is less clear here (see Appendix I for a discussion on the addition of SIVI). Nonetheless, the inception score is again only correlated with the qualitative gains we observe in Fig. 6.
7 CONCLUSION
The contributions we report in this paper include suggested neural process architectures (max pooling, no deterministic path) and objectives (regular amortized inference versus the heuristic NP objective, SIVI versus non-mixture variational family) that produce qualitatively better calibrated posteriors, particularly in low context cardinality settings. We provide empirical evidence of how natural posterior contraction may be facilitated by the neural process architecture. Finally, we establish quantitative evidence that shows improvements in neural process posterior predictive performance and highlight the need for better metrics for quantitatively evaluating posterior calibration.
We remind the reader that this work, like most other deep learning work, highlights the impact of varying only a small subset of the dimensions of architecture and objective degrees of freedom. We found that, for instance, simply making ρφ deeper than that reported in the literature improved baseline results substantially. The choice of learning rate also had a large impact on the relative gap between the reported alternatives. We report what we believe to be the most robust configuration across all the configurations that we explored: max pooling and SIVI consistently improve performance. | 1. What is the focus of the paper regarding neural networks?
2. What are the strengths and weaknesses of the proposed approach compared to traditional methods?
3. What is the novel aspect introduced by the paper in the context of Bayesian inference?
4. How does the reviewer assess the significance of the paper's contribution to the field of machine learning? | Review | Review
This paper proposes an improvement of the standard NP by using a mixture distribution $q_\phi$, semi-implicit variational inference, and max pooling to capture the multimodal structure of the posterior distribution. Replacing a single Gaussian distribution with a mixture (typically of Gaussians) is a widely adopted idea in latent variable models, including NPs; the adopted semi-implicit variational inference was originally developed in Yin and Zhou, ICML 2018, and no further improvement on this inference method is proposed in this manuscript; max pooling is one of three commonly used pooling methods (max, min, and mean pooling), and using one of them to replace another is simple, but the explanation of why max pooling is better is interesting and profound. So the improvement is weak, although it is shown to be effective by the empirical study. More importantly, the authors have investigated the posterior contraction of NPs, which is interesting. The relationship between the two parts of the objective function of NP has been discussed in relation to posterior contraction; both parts contribute to the contraction, apart from their classical interpretation as reconstruction and regularization. To the best of my knowledge, this is the first work to discuss the posterior contraction of NPs. Posterior contraction is a classical property in Bayesian inference, and this link will enable further theoretical analysis of NPs. |
1. What is the contribution of the paper to Neural Processes (NPs), and how does it improve predictive likelihoods and diversity of posterior samples?
2. What is the difference between using max-pooling and mean-pooling as aggregation functions in NPs, and how do they impact the performance of the model?
3. How does the use of a mixture distribution with hierarchical VI for the decoder contribute to the improvement in diversity of posterior samples?
4. What are the limitations of the proposed approach, and how could it be improved by combining it with other modern NP architectures such as Attentive NPs or Convolutional NPs?
5. How does the choice of aggregation function impact the estimation of posterior means and variances, and is there an optimal combination of pooling methods that could be used? | Review | Review
The authors propose an extension to Neural Processes (NPs), where they use max-pooling instead of mean-pooling as the aggregation function and use a mixture distribution with hierarchical VI for the encoder. They show that this slightly improves the predictive likelihoods but, crucially, strongly improves the diversity of posterior samples when the conditioning set is small, which they argue is an important feature of such models.
Major comments:
What is the actual impact of using the SIVI? In most experiments, it looks like NP+max performs just as well.
What would happen if one would use mean-pooling AND max-pooling and just concatenate the two to yield an aggregated representation? Wouldn't that combine the best of both worlds and the downstream decoder could learn which representation to use for the mean and the variance prediction?
Could these ideas (SIVI, max-pooling) also be combined with more modern NP architectures (like Attentive NPs or Convolutional NPs)?
Minor comments:
It is argued that max-pooling is naturally better at capturing useful information for estimating the posterior variances. But what about the posterior mean? Shouldn't mean-pooling be better for that?
In Tab. 1, "NP+max" seems to be the best-performing model. Why is it not shown in Tab. 6?
Summary: I think the focus on the diversity of posterior samples is very interesting and highlights an important property of these kinds of models. However, given the relative simplicity of the proposed extensions, I feel that they are not studied extensively enough. For the paper to provide clear value for the community, I think it would be good to extend the experiments to cover the whole combination space of {NP, ANP, SIVI} x {mean-pooling, max-pooling, mean+max-pooling}, so that it becomes clearer what the influence of the different design choices is, both individually and in combination with each other. |
ICLR | Title
Rethinking the Hyperparameters for Fine-tuning
Abstract
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for “dissimilar” datasets. Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.
1 INTRODUCTION
Many real-world applications often have a limited number of training instances, which makes directly training deep neural networks hard and prone to overfitting. Transfer learning with the knowledge of models learned on a similar task can help to avoid overfitting. Fine-tuning is a simple and effective approach of transfer learning and has become popular for solving new tasks in which pre-trained models are fine-tuned with the target dataset. Specifically, fine-tuning on pre-trained ImageNet classification models (Simonyan & Zisserman, 2015; He et al., 2016b) has achieved impressive results for tasks such as object detection (Ren et al., 2015) and segmentation (He et al., 2017; Chen et al., 2017) and is becoming the de-facto standard of solving computer vision problems. It is believed that the weights learned on the source dataset with a large number of instances provide better initialization for the target task than random initialization. Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019).
The common practice of fine-tuning is to adopt the default hyperparameters for training large models while using a smaller initial learning rate and a shorter learning rate schedule. It is believed that adhering to the original hyperparameters for fine-tuning with a small learning rate prevents destroying the originally learned knowledge or features. For instance, many studies conduct fine-tuning of ResNets (He et al., 2016b) with these default hyperparameters: learning rate 0.01, momentum 0.9 and weight decay 0.0001. However, the default setting is not necessarily optimal for fine-tuning on other tasks. While a few studies have performed extensive hyperparameter search for learning rate and weight decay (Mahajan et al., 2018; Kornblith et al., 2019), the momentum coefficient is rarely changed. Though the effectiveness of the hyperparameters has been studied extensively for training a model from scratch, how to set the hyperparameters for fine-tuning is not yet fully understood.
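In code, the default recipe described above typically looks like the following PyTorch sketch. The values shown are the commonly carried-over defaults this paper questions, not a recommendation:

import torch
import torchvision

# The de-facto fine-tuning recipe questioned in this paper: ImageNet weights,
# plus the hyperparameters usually carried over from training from scratch.
model = torchvision.models.resnet50(pretrained=True)
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01,           # smaller than the from-scratch 0.1
                            momentum=0.9,      # almost never tuned in practice
                            weight_decay=1e-4)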
∗Work done while at Amazon Web Services
In addition to using ad-hoc hyperparameters, commonly held beliefs for fine-tuning also include:
• Fine-tuning pre-trained networks outperforms training from scratch; recent work (He et al., 2019) has already revisited this.
• Fine-tuning from similar domains and tasks works better (Ge & Yu, 2017; Cui et al., 2018; Achille et al., 2019; Ngiam et al., 2018).
• Explicit regularization with initial models matters for transfer learning performance (Li et al., 2018; 2019).
Are these practices or beliefs always valid? From an optimization perspective, the difference between fine-tuning and training from scratch is all about the initialization. However, the loss landscapes around the pre-trained model and the fine-tuned solution could be very different, and so could their optimization strategies and hyperparameters. Would the hyperparameters for training from scratch still be useful for fine-tuning? In addition, most of the hyperparameters (e.g., batch size, momentum, weight decay) are frozen; will the conclusions differ when some of them are changed?
With these questions in mind, we re-examined the common practices for fine-tuning. We conducted extensive hyperparameter search for fine-tuning on various transfer learning benchmarks with different source models. The goal of our work is not to obtain state-of-the-art performance on each fine-tuning task, but to understand the effectiveness of each hyperparameter for fine-tuning while avoiding unnecessary computation. We explain why certain hyperparameters work well on certain datasets while failing on others, which can guide hyperparameter search for fine-tuning.
Our main findings are as follows:
• Optimal hyperparameters for fine-tuning are not only dataset dependent, but are also dependent on the similarity between the source and target domains, which is different from training from scratch. Therefore, the common practice of using optimization schedules derived from ImageNet training cannot guarantee good performance. It explains why some tasks do not achieve satisfactory results after fine-tuning: the hyperparameter selection is inappropriate. Specifically, as opposed to the common practice of rarely tuning the momentum value beyond 0.9, we find that zero momentum sometimes works better for fine-tuning on tasks that are similar to the source domain, while nonzero momentum works better for target domains that are different from the source domain.
• Hyperparameters are coupled together, and it is the effective learning rate, which encapsulates the learning rate and momentum, that matters for fine-tuning performance. While the effective learning rate has been studied for training from scratch, to the best of our knowledge, no previous work investigates it for fine-tuning, and it is rarely used in practice. Our observation on momentum can be explained as follows: small momentum actually decreases the effective learning rate, which is more suitable for fine-tuning on similar tasks. We show that the optimal effective learning rate depends on the similarity between the source and target domains.
• We find that regularization methods designed to keep models close to the initial model do not necessarily work for “dissimilar” datasets, especially for networks with Batch Normalization. Simple weight decay can result in performance as good as the reference-based regularization methods for fine-tuning, with a better search space.
2 RELATED WORK
In transfer learning for image classification, the last layer of a pre-trained network is usually replaced with a randomly initialized fully connected layer with the same size as the number of classes in the target task (Simonyan & Zisserman, 2015). It has been shown that fine-tuning the whole network usually results in better performance than using the network as a static feature extractor (Yosinski et al., 2014; Donahue et al., 2014; Huh et al., 2016; Mormont et al., 2018; Kornblith et al., 2019). Ge & Yu (2017) select images that have similar local features from source domain to jointly fine-tune pre-trained networks. Cui et al. (2018) estimate domain similarity with ImageNet and demonstrate that transfer learning benefits from pre-training on a similar source domain. Besides image classification, many object detection frameworks also rely on fine-tuning to improve over training from scratch (Girshick et al., 2014; Ren et al., 2015).
Many researchers re-examined whether fine-tuning is a necessity for obtaining good performance. Ngiam et al. (2018) find that when domains are mismatched, the effectiveness of transfer learning is negative, even when the domains are intuitively similar. Kornblith et al. (2019) examine the fine-tuning performance of various ImageNet models and find a strong correlation between ImageNet top-1 accuracy and transfer accuracy. They also find that pre-training on ImageNet provides minimal benefits for some fine-grained object classification datasets. He et al. (2019) questioned whether ImageNet pre-training is necessary for training object detectors. They find that training from scratch is no worse than the fine-tuning counterpart as long as the target dataset is large enough. Raghu et al. (2019) find that transfer learning provides a negligible performance boost on medical imaging applications, but speeds up convergence significantly.
There is a large body of literature on hyperparameter selection for training neural networks from scratch, mostly on batch size, learning rate and weight decay (Goyal et al., 2017; Smith et al., 2018; Smith & Topin, 2019). There are few works on the selection of momentum (Sutskever et al., 2013). Zhang & Mitliagkas (2017) proposed an automatic tuner for momentum and learning rate in SGD. There are also studies on the correlations of hyperparameters, such as the linear scaling rule between batch size and learning rate (Goyal et al., 2017; Smith et al., 2018; Smith, 2017). However, most of these advances on hyperparameter tuning are designed for training from scratch and have not been examined on fine-tuning tasks for computer vision problems. Most work on fine-tuning simply chooses fixed hyperparameters (Cui et al., 2018) or uses dataset-dependent learning rates (Li et al., 2018). Due to the huge computational cost of hyperparameter search, only a few works (Kornblith et al., 2019; Mahajan et al., 2018) performed a large-scale grid search over learning rate and weight decay to obtain the best performance.
3 TUNING HYPERPARAMETERS FOR FINE-TUNING
In this section, we first introduce the notation and experimental settings, and then present our observations on momentum, effective learning rate and regularization. The fine-tuning process is no different from learning from scratch except for the weight initialization. The goal is still to minimize the objective function $L = \frac{1}{N}\sum_{i=1}^{N} \ell(f(x_i, \theta), y_i) + \frac{\lambda}{2}\|\theta\|_2^2$, where $\ell$ is the loss function, $N$ is the number of samples, $x_i$ is the input data, $y_i$ is its label, $f$ is the neural network, $\theta$ is the model parameters and $\lambda$ is the regularization hyperparameter or weight decay. Momentum is widely used for accelerating and smoothing the convergence of SGD by accumulating a velocity vector in the direction of persistent loss reduction (Polyak, 1964; Sutskever et al., 2013; Goh, 2017). The commonly used Nesterov’s Accelerated Gradient (Nesterov, 1983) is given by:
$$v_{t+1} = m v_t - \eta_t \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(f(x_i, \theta_t + m v_t), y_i) \quad (1)$$
$$\theta_{t+1} = \theta_t + v_{t+1} - \eta \lambda \theta_t \quad (2)$$
where $\theta_t$ indicates the model parameters at iteration $t$. The hyperparameters include the learning rate $\eta_t$, batch size $n$, momentum coefficient $m \in [0, 1)$, and the weight decay $\lambda$.
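For concreteness, the update in Eqs. (1)-(2) can be written in a few lines of code. The sketch below is ours (the function name and the `grad_fn` interface are assumptions), not part of the original experiments:

```python
import numpy as np

def nesterov_sgd_step(theta, v, grad_fn, lr, momentum, weight_decay):
    """One Nesterov SGD step following Eqs. (1)-(2).

    grad_fn(theta) should return the mini-batch gradient
    (1/n) * sum_i grad l(f(x_i, theta), y_i) as an ndarray.
    """
    lookahead = theta + momentum * v                       # gradient is taken at theta_t + m*v_t
    v_new = momentum * v - lr * grad_fn(lookahead)         # Eq. (1): velocity update
    theta_new = theta + v_new - lr * weight_decay * theta  # Eq. (2): step plus weight decay
    return theta_new, v_new
```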
3.1 EXPERIMENTAL SETTINGS
We evaluate fine-tuning on seven widely used image classification datasets, which cover fine-grained object recognition, scene recognition and general object recognition. Detailed statistics of each dataset are given in Table 1. We use ImageNet (Russakovsky et al., 2015), Places-365 (Zhou et al., 2018) and iNaturalist (Van Horn et al., 2018) as source domains for pre-trained models. We resize the input images such that the aspect ratio is preserved and the shorter side is 256 pixels. The images are normalized with mean and std values calculated over ImageNet. For data augmentation, we adopt the common practices used for training ImageNet models (Szegedy et al., 2015): random mirror, random scaled cropping with scale and aspect variations, and color jittering. The augmented images are resized to 224×224. Note that state-of-the-art results could be improved further by using higher-resolution images (Cui et al., 2018) or better data augmentation (Cubuk et al., 2018).
We mainly use ResNet-101-V2 (He et al., 2016a) as our base network, which is pre-trained on ImageNet (Russakovsky et al., 2015). Similar observations are also made on DenseNets (Huang et al., 2017) and MobileNet (Howard et al., 2017). The hyperparameters to be tuned (and ranges)
are: learning rate (0.1, 0.05, 0.01, 0.005, 0.001, 0.0001), momentum (0.99, 0.95, 0.9, 0.8, 0.0) and weight decay (0.0, 0.0001, 0.0005, 0.001). We set the default hyperparameters to batch size 256¹, learning rate 0.01, momentum 0.9 and weight decay 0.0001. To avoid insufficient training and to observe the complete convergence behavior, we use 300 epochs for fine-tuning and 600 epochs for scratch training, which is long enough for the training curves to converge. The learning rate is decayed by a factor of 0.1 at epochs 150 and 250. We report the Top-1 validation (test) error at the end of training. The total computation time for the experiments is more than 10K GPU hours.
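As a concrete illustration of the search space above, the grid can be enumerated as follows (a sketch; the config dictionary keys are our own naming):

```python
from itertools import product

# The grid from Section 3.1 (batch size is held at the default 256).
learning_rates = [0.1, 0.05, 0.01, 0.005, 0.001, 0.0001]
momenta = [0.99, 0.95, 0.9, 0.8, 0.0]
weight_decays = [0.0, 0.0001, 0.0005, 0.001]

configs = [
    {"lr": lr, "momentum": m, "weight_decay": wd, "batch_size": 256}
    for lr, m, wd in product(learning_rates, momenta, weight_decays)
]
print(len(configs))  # 120 fine-tuning runs per (source, target) pair
```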
3.2 EFFECT OF MOMENTUM AND DOMAIN SIMILARITY
Momentum 0.9 is the most widely used value for training from scratch (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016b) and is also widely adopted for fine-tuning (Kornblith et al., 2019). To the best of our knowledge, it is rarely changed, regardless of the network architecture or target task. To check the influence of momentum on fine-tuning, we first search for the best momentum value for fine-tuning on the Birds dataset with different weight decays and learning rates. Figure 1(a) shows the performance of fine-tuning with and without weight decay. Surprisingly, zero momentum actually outperforms nonzero momentum. The optimal learning rate also increases when momentum is disabled, as shown in Figure 1(b).
To verify this observation, we further compare momentum 0.9 and 0.0 on other datasets. Table 2 shows the performance of 8 hyperparameter settings on 7 datasets. We observe a clear pattern that disabling momentum works better for Dogs, Caltech and Indoor, while momentum 0.9 works better for Cars, Aircrafts and Flowers.
1 For each training job with ResNet-101 and batch size 256, we use 8 NVIDIA Tesla V100 GPUs for synchronous training, where each GPU uses a batch of 32 and no SyncBN is used.
Interestingly, datasets such as Dogs, Caltech, Indoor and Birds are known to have high overlap with the ImageNet dataset², while Cars and Aircrafts are identified as difficult to benefit from fine-tuning pre-trained ImageNet models (Kornblith et al., 2019). According to Cui et al. (2018), in which the Earth Mover’s Distance (EMD) is used to calculate the similarity between ImageNet and other domains, the similarities to Dogs and Birds are 0.619 and 0.563, while the similarities to Cars, Aircrafts and Flowers are 0.560, 0.556 and 0.525³. The relative order of similarities to ImageNet is
Dogs, Birds, Cars, Aircrafts and Flowers
which aligns well with the transition of the optimal momentum value from 0.0 to 0.9. Following the same similarity calculation, we also verified that Caltech and Indoor are closer to ImageNet than Cars/Aircrafts/Flowers (Table 4).
To verify the connection between momentum and domain similarity, we further fine-tune from different source domains such as Places-365 and iNaturalist, which are known to be better source domains than ImageNet for fine-tuning on Indoor and Birds dataset (Cui et al., 2018). We may expect that fine-tuning from iNaturalist works better for Birds with m = 0 and similarly, Places for Indoor. Indeed, as shown in Table 3, disabling momentum improves the performance when the source and target domain are similar, such as Places for Indoor and iNaturalist for Birds.
Small momentum works better for fine-tuning on domains that are close to the source domain One explanation for the above observations is that because the Dogs dataset is very close to ImageNet, the pre-trained ImageNet model is expected to be close to the fine-tuned solution on the Dogs dataset. In this case, momentum may not help much, as the gradient direction around the minimum could be largely random and accumulating the momentum direction could be meaningless. Whereas, for
2Stanford Dogs (Khosla et al., 2011) was built using images and annotations from ImageNet for the task of fine-grained image categorization. At least 200 categories of Caltech-256 exist in ImageNet (Deng et al., 2010). Images in the CUB-Birds dataset overlap with images in ImageNet.
3The domain similarity calculation is discussed in Appendix B and the exact values can be found in Table 4.
faraway target domains (e.g., Cars and Aircrafts), where the pre-trained ImageNet model could be much different from the fine-tuned solution, the fine-tuning process is more similar to training from scratch, where large momentum stabilizes the descent directions towards the minimum. An illustration of the difference can be found in Figure 2.
Connections to early observations on decreasing momentum Early work (Sutskever et al., 2013) actually pointed out that reducing momentum during the final stage of training allows finer convergence, while aggressive momentum would prevent this. They recommended reducing momentum from 0.99 to 0.9 in the last 1000 parameter updates, but not disabling it completely. Recent work (Liu et al., 2018; Smith, 2018) showed that a large momentum helps escape saddle points but can hurt the final convergence within the neighborhood of the optima, implying that momentum should be reduced at the end of training. Liu et al. (2018) find that a larger momentum introduces a higher variance of noise and encourages more exploration at the beginning of optimization, and more aggressive exploitation at the end of training. They suggest that at the final stage of step size annealing, momentum SGD should use a much smaller step size than vanilla SGD. Applied to fine-tuning, we can interpret this as follows: if the pre-trained model lies in the neighborhood of the optimal solution on the target dataset, the momentum should be small. Our work provides empirical evidence that disabling momentum helps final convergence, and fine-tuning on close domains is a good exemplar.
3.3 COUPLED HYPERPARAMETERS AND THE VIEW OF EFFECTIVE LEARNING RATE
So far we have examined the effect of momentum by fixing the other hyperparameters and allowing only momentum to change. But note that the two difficult scenarios shown in Figure 2 (b) and (c) might also be mitigated by increasing or decreasing the learning rate. That is, hyperparameters are coupled, and varying one hyperparameter can change the optimal values of the other hyperparameters that lead to the best performance. Moreover, the optimal values of certain hyperparameters depend on the values of other hyperparameters in systematic ways. For example, the learning rate is entangled with batch size, momentum and weight decay. There is a notion of effective learning rate (ELR) (Hertz et al., 1991; Smith et al., 2018; Smith & Le, 2018) for SGD with momentum: $\eta' = \eta/(1-m)$, which was shown to be more closely related to training dynamics and final performance than $\eta$. The effective learning rate with m = 0.9 is 10× higher than the one with m = 0.0 if other hyperparameters are fixed, which is probably why we see an increase in the optimal learning rate when momentum is disabled in Figure 1(b) and Appendix A.
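A minimal sketch of this quantity (the helper name is ours):

```python
def effective_lr(lr, momentum):
    """ELR for SGD with momentum: eta' = eta / (1 - m)."""
    return lr / (1.0 - momentum)

# Two settings with the same ELR of 0.1, which we expect to behave similarly:
print(effective_lr(0.01, 0.9))  # 0.1
print(effective_lr(0.1, 0.0))   # 0.1
```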
It is the effective learning rate that matters for fine-tuning performance Because hyperparameters are coupled, looking at the performance with only one hyperparameter varied may give a misleading understanding of its effect. Therefore, to examine the effect of momentum, we should report the best result obtainable with and without momentum, as long as the other hyperparameters are sufficiently explored. We re-examine previous experiments that demonstrated the importance of momentum tuning when the ELR $\eta' = \eta/(1-m)$ is held fixed instead of simply fixing the learning rate $\eta$. Figure 3 shows that when $\eta'$ is constant, the best performance obtained by m = 0.9 and m = 0 is almost equivalent when other hyperparameters are allowed to change. However, different ELRs do result in different performance, which indicates the importance of the ELR for the best performance. This explains why the common practice of changing only the learning rate generally works: changing momentum may produce the same result, as both change the ELR. In fact, as long as the initial learning rate is small enough, we can always search for the optimal momentum, since momentum is an amplifier that makes the ELR larger by a factor of 1/(1−m). Therefore, momentum does determine the search range of the learning rate.
Optimal ELR depends on the similarity between source domain and target domain Now that we have shown the ELR is critical for fine-tuning performance, we are interested in the factors that determine the optimal ELR for a given task. Previous work (Smith & Le, 2018) found that there is an optimal ELR which maximizes the test accuracy. However, those observations are based only on training from scratch on small datasets (e.g., CIFAR-10); the relationship between the ELR and domain similarity, especially for fine-tuning, is still unexplored. To examine this, we search for the best ELR on each fine-tuning task and report in Fig. 4 the best validation error obtained by each ELR while allowing other hyperparameters to change. It shows that the optimal ELR depends on both the source domain and the target domain. As shown in Fig. 4 (a-c), the optimal ELRs for Dogs/Caltech/Indoor are much smaller than those for Aircrafts/Flowers/Cars when fine-tuning from an ImageNet pre-trained model. Similar observations can be made on DenseNets and MobileNet. Though the optimal ELR value differs, the relative order of domain similarity is consistent and architecture agnostic. We can also see that a smaller ELR works better when the source and target domains are similar, such as Dogs for ImageNet and Birds for iNat2017 (Fig. 4 (a, d-e)). Interestingly, the optimal ELR for training from scratch is much larger and very similar across different target datasets, which indicates that the distance from a random initialization is uniformly similar for different target datasets.
Optimal ELR selection based on domain similarity Now we have made qualitative observations about the relationship between domain similarity and the optimal ELR. A quantitative characterization of this relationship could reduce the hyperparameter search ranges for HPO, or even eliminate HPO by accurately predicting hyperparameters. We followed the domain similarity calculation in (Cui et al., 2018) and recalculated similarity scores for all source-target domain pairs. Note that the original domain similarity calculation in (Cui et al., 2018) uses pre-trained JFT (Sun et al., 2017) models as feature extractors, which are not publicly available. We alternatively use an ImageNet pre-trained model or the source model as the feature extractor. As shown in Table 4, there is a good correlation between the domain similarity score and the scale of the optimal ELR. Generally, the more similar the two domains, the smaller the optimal ELR. Though the optimal ELR does not strictly follow the domain similarity score, the score provides a reasonable prediction of its scale, such as [0.001, 0.01], [0.01, 0.1] or [0.1, 1.0], and can therefore reduce the search space. Based on this correlation, a simple strategy can be developed for optimal ELR selection given a frequently used source model: one can calculate domain similarities and perform exhaustive hyperparameter searches for a few reference datasets, including similar and dissimilar ones. Then, given a new dataset to fine-tune, one can calculate its domain similarity, compare it with the scores of the reference datasets, and choose the ELR range of the reference dataset with the closest domain similarity.
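A sketch of this selection strategy is given below; the similarity scores are the values quoted earlier for ImageNet as the source, while the mapping of datasets to ELR ranges is illustrative only:

```python
def pick_elr_range(new_score, reference):
    """Return the ELR search range of the reference dataset whose
    domain-similarity score is closest to the new dataset's score."""
    closest = min(reference, key=lambda name: abs(reference[name][0] - new_score))
    return reference[closest][1]

# Similarity scores to ImageNet are from the text; the range assigned to
# each dataset is an illustrative assumption, not a measured value.
reference = {
    "Dogs":    (0.619, (0.001, 0.01)),  # similar to the source -> small ELR
    "Birds":   (0.563, (0.01, 0.1)),
    "Flowers": (0.525, (0.1, 1.0)),     # dissimilar -> large ELR
}
print(pick_elr_range(0.61, reference))  # -> (0.001, 0.01)
```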
Weight Decay and Learning Rate The relationship between weight decay and effective learning rate has recently been well studied (van Laarhoven, 2017; Zhang et al., 2018; Loshchilov & Hutter, 2018). It was shown that the effect of weight decay on models with BN layers is equivalent to increasing the ELR by shrinking the weight scales, i.e., $\eta' \sim \eta/\|\theta\|_2^2$. And if an optimal effective learning rate exists, the optimal weight decay value $\lambda$ is inversely related to the optimal learning rate $\eta$. The ‘effective’ weight decay is $\lambda' = \lambda/\eta$. We show in Figure 5 that the optimal effective weight decay is also correlated with domain similarity.
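These two relations are simple enough to state in code (a sketch; the helper names are ours):

```python
import numpy as np

def effective_weight_decay(weight_decay, lr):
    """'Effective' weight decay: lambda' = lambda / eta."""
    return weight_decay / lr

def bn_effective_lr(lr, theta):
    """For scale-invariant (BN) layers, eta' ~ eta / ||theta||_2^2,
    so shrinking the weights raises the effective learning rate."""
    return lr / float(np.sum(np.square(theta)))
```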
3.4 THE CHOICE OF REGULARIZATION
L2 regularization or weight decay is widely used for constraining the model capacity (Hanson & Pratt, 1989; Krogh & Hertz, 1992). Recently Li et al. (2018; 2019) pointed out that standard L2 regularization, which drives the parameters towards the origin, is not adequate in transfer learning. To retain the knowledge learned by the pre-trained model, reference-based regularization was used to regularize the distance between the fine-tuned weights and the pre-trained weights, so that the fine-tuned model is not too different from the initial model. Li et al. (2018) propose the L2-SP norm, i.e., $\frac{\lambda_1}{2}\|\theta' - \theta^0\|_2^2 + \frac{\lambda_2}{2}\|\theta''\|_2^2$, where $\theta'$ refers to the part of the network shared with the source network, and $\theta''$ refers to the novel part, e.g., the last layer with a different number of neurons. While the motivation is intuitive, there are several issues with adopting reference-based regularization for fine-tuning:
• Many applications actually adopt fine-tuning on target domains that are quite different from the source domain, such as fine-tuning ImageNet models for medical imaging (Mormont et al., 2018; Raghu et al., 2019). The fine-tuned model does not necessarily have to be close to the initial model.
• The scale invariance introduced by Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers enables models with different parameter scales to function identically, i.e., $f(\theta) = f(\alpha\theta)$. Therefore, even when L2 regularization drives $\|\theta\|_2^2$ towards zero, the model could still have the same functionality as the initial model. On the contrary, a model could still be different even when the L2-SP norm is small.
• L2-SP regularization constrains $\theta'$ to be close to $\theta^0$, so that $\|\theta\|_2^2$ is relatively stable in comparison with L2 regularization. Given that the ELR is approximately proportional to $\eta/\|\theta\|_2^2$ and a smaller ELR is beneficial for fine-tuning from similar domains, this may explain why L2-SP provides better performance. If this is true, then by decreasing the initial ELR, the plain L2 norm may function the same.
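For reference, a minimal PyTorch-style sketch of the L2-SP penalty is given below; we assume `ref_params` stores the source weights keyed by parameter name, and that any parameter missing from it belongs to the novel part $\theta''$:

```python
import torch

def l2_sp_penalty(model, ref_params, lambda1, lambda2):
    """L2-SP penalty of Li et al. (2018):
    (lambda1 / 2) * ||theta' - theta0||^2 + (lambda2 / 2) * ||theta''||^2.

    ref_params maps parameter names to the source (pre-trained) weights;
    parameters absent from it are treated as the novel part theta''.
    """
    shared, novel = 0.0, 0.0
    for name, p in model.named_parameters():
        if name in ref_params:
            shared = shared + (p - ref_params[name]).pow(2).sum()
        else:
            novel = novel + p.pow(2).sum()
    return 0.5 * lambda1 * shared + 0.5 * lambda2 * novel
```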
To examine these conjectures, we revisited the work of Li et al. (2018) with additional experiments. To show the effectiveness of the L2-SP norm, the authors conducted experiments on datasets such as Dogs, Caltech and Indoor, which are all close to the source domain (ImageNet or Places-365). We extend their experiments by fine-tuning on both “similar” and “dissimilar” datasets, including Birds, Cars, Aircrafts and Flowers, with both L2 and L2-SP regularization (details in Appendix D). For a fair comparison, we perform the same hyperparameter search for both methods. As expected, Table 5 shows that L2 regularization is very competitive with L2-SP on Birds, Cars, Aircrafts and Flowers, which indicates that reference-based regularization may not generalize well for fine-tuning on dissimilar domains.
We also check the change of the regularization terms during training for both methods, as well as their best hyperparameters. As shown in Figure 6, L2 regularization usually decreases the weight norm more aggressively, depending on the value of λ, while L2-SP regularization keeps the norm less changed. We can see that the optimal learning rate of L2 regularization is mostly smaller than that of L2-SP, which may compensate for the decreased weight norm or increased ELR. Interestingly, for the Dogs dataset, both regularization terms grow much larger after a few iterations and then become stable, which means constraining the weights to be close to the initialization is not necessarily the reason for L2-SP to work, even for close domains. It also seems to contradict the previous finding (Zhang et al., 2018) that weight decay functions as increasing the ELR by decreasing weight norms. However, it might be reasonable, as a large norm actually decreases the ELR, which could be helpful due to the close domain similarity between Dogs and ImageNet.
4 DISCUSSION
The two extreme ways of selecting hyperparameters, performing an exhaustive hyperparameter search or reusing ad-hoc values from scratch training, could be either too computationally expensive or yield inferior performance. Unlike training from scratch, where the default hyperparameter setting may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but is also influenced by the similarity between the target and source domains. The rarely tuned momentum value can also improve or impede performance when the target and source domains are close, given an insufficiently searched learning rate. These observations connect with previous theoretical work on decreasing momentum at the end of training and on the effective learning rate. We further identify that the optimal effective learning rate correlates with the similarity between the source and target domains. With this understanding, one can significantly reduce the hyperparameter search space. We hope these findings could be a step towards better and more efficient hyperparameter selection for fine-tuning.
ACKNOWLEDGMENTS
The authors would like to thank all anonymous reviewers for their valuable feedback.
A THE EFFECTIVENESS OF MOMENTUM
Searching for Optimal Momentum To check the effectiveness of momentum on fine-tuning, we search for the best momentum values with a fixed learning rate but different weight decay and batch size settings. Taking the Birds dataset as an example, Figure 7 provides the convergence curves for the results shown in Figure 1(a), covering fine-tuning with 6 different batch size and weight decay combinations. Zero momentum outperforms nonzero momentum in 5 out of 6 configurations.
Effective learning rate increases after disabling momentum. Figure 8 compares the performance with and without momentum on the Dogs dataset over a range of learning rates. Note that the learning rate with similar performance generally increases 10× after changing m from 0.9 to 0.0, which is coherent with the effective learning rate rule $\eta' = \eta/(1-m)$. The same observations can be made on other datasets, as shown in Figure 9.
[Figure 9: learning curves (Top-1 error vs. epochs) of fine-tuning ResNet-101-v2 with n = 256, λ = 0.0001 and varying learning rate η. Panels: (a) Caltech, m = 0.9; (b) Caltech, m = 0.0; (c) Indoor, m = 0.9; (d) Indoor, m = 0.0.]
B DOMAIN SIMILARITY
The domain similarity calculation based on the Earth Mover’s Distance (EMD) is introduced in Section 4.1 of (Cui et al., 2018)⁴. Here we briefly introduce the steps. In (Cui et al., 2018), the authors first train ResNet-101 on the large-scale JFT dataset (Sun et al., 2017) and use it as a feature extractor. They extract features from the penultimate layer of the model for each image of the training set of the source and target domains. For ResNet-101, the length of the feature vector is 2048. The features of images belonging to the same category are averaged, and $g(s_i)$ denotes the average feature vector of the $i$th label in the source domain $S$; similarly, $g(t_j)$ denotes the average feature vector of the $j$th label in the target domain $T$. The distance between the averaged features of two labels is $d_{i,j} = \|g(s_i) - g(t_j)\|$. Each label is associated with a weight $w \in [0, 1]$ corresponding to the percentage of images with this label in the dataset. So the source domain $S$ with $m$ labels and the target domain $T$ with $n$ labels can be represented as $S = \{(s_i, w_{s_i})\}_{i=1}^{m}$ and $T = \{(t_j, w_{t_j})\}_{j=1}^{n}$. The EMD between the two domains is defined as
$$d(S, T) = \mathrm{EMD}(S, T) = \frac{\sum_{i=1,j=1}^{m,n} f_{i,j}\, d_{i,j}}{\sum_{i=1,j=1}^{m,n} f_{i,j}} \quad (3)$$
where the optimal flow $f_{i,j}$ corresponds to the least amount of total work, obtained by solving the EMD optimization problem. The domain similarity is defined as
$$\mathrm{sim}(S, T) = e^{-\gamma d(S, T)} \quad (4)$$
where $\gamma$ is 0.01. Note that the domain similarity value does not range from 0 to 1.
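A sketch of Eqs. (3)-(4) using the POT optimal transport library (one possible EMD solver, an assumption on our part; with weights normalized to sum to one, the transport cost equals the ratio in Eq. (3)):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def domain_similarity(g_s, w_s, g_t, w_t, gamma=0.01):
    """Eqs. (3)-(4): EMD between per-label mean features, then sim = exp(-gamma * d).

    g_s: (m, d) array of mean features per source label; w_s: (m,) label weights.
    g_t: (n, d) array of mean features per target label; w_t: (n,) label weights.
    """
    # d_ij = ||g(s_i) - g(t_j)||
    D = np.linalg.norm(g_s[:, None, :] - g_t[None, :, :], axis=-1)
    w_s = np.asarray(w_s, dtype=float) / np.sum(w_s)
    w_t = np.asarray(w_t, dtype=float) / np.sum(w_t)
    d = ot.emd2(w_s, w_t, D)  # optimal transport cost; flows sum to 1 after normalization
    return np.exp(-gamma * d)
```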
Due to the unavailability of the large-scale JFT dataset (300× larger than ImageNet) and its pre-trained ResNet-101 model, we cannot use it for extracting features for new datasets, such as Caltech-256 and
4The extracted features and code are available at https://github.com/richardaecn/cvpr18-inaturalist-transfer
MIT67-Indoor. Instead of using that powerful feature representation, we use our pre-trained ImageNet model (ResNet-101) as the feature extractor. Table 4 compares the domain similarities calculated by different pre-trained models, and we can see consistent patterns across different architectures: e.g., the 1st and 2nd highest similarity scores are Caltech and Dogs regardless of architecture; the 3rd and 4th highest refer to Birds and Indoor; the most dissimilar datasets are Cars, Aircrafts and Flowers, though their relative order is not exactly the same. Besides using a fixed feature extractor, an alternative is to use the source domain model directly as the feature extractor for both the source and target domains, which may capture the transfer learning process more precisely than a uniform feature extractor.
C THE EFFECTIVENESS OF BN MOMENTUM
Kornblith et al. (2019) conducted extensive fine-tuning experiments with different hyperparameters. One observation they made is that the momentum parameter of the BN layer is essential for fine-tuning. They found it useful to decrease the BN momentum parameter from its ImageNet value to $\max(1 - 10/s, 0.9)$, where $s$ is the number of steps per epoch. This changes the default BN momentum value (0.9) only when $s$ is larger than 100, i.e., when the dataset size is larger than 25.6K with batch size 256. The largest dataset used in our experiments is Caltech-256, which has about 15K images, so this strategy is not applicable.
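The rule is easy to state in code (a sketch with illustrative step counts):

```python
def bn_momentum(steps_per_epoch, default=0.9):
    """BN momentum rule of Kornblith et al. (2019): max(1 - 10/s, default)."""
    return max(1.0 - 10.0 / steps_per_epoch, default)

# Caltech-256 (~15K images) at batch size 256 gives s ~ 59 steps per epoch,
# so the rule leaves the default value unchanged:
print(bn_momentum(59))   # 0.9
print(bn_momentum(200))  # 0.95 -- the rule only kicks in for larger datasets
```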
We further validate the effect of BN momentum by performing a study similar to the one for the ELR. The goal is to identify whether there is an optimal BN momentum for a given task. For each dataset, we fine-tune the pre-trained model using the previously obtained best hyperparameters and only vary BN momentum. In addition to the default value 0.9, we also set it to 0.0, 0.95 and 0.99. The rationale is that if BN momentum is a critical hyperparameter, we should expect significant performance differences when the value deviates from the optimum. As shown in Figure 10, $m_{bn} = 0.99$ slightly improves the performance on some datasets; however, there is no significant performance difference among values greater than 0.9. One hypothesis is that similar domains share similar BN parameters and statistics, and BN momentum may affect the parameter adaptation. More investigation is still needed to fully understand its effectiveness.
D EXPERIMENTAL SETTINGS FOR COMPARISON OF L2 AND L2-SP
The experiments in Section 3.4 are based on the code⁵ provided by (Li et al., 2018). The base network is an ImageNet pre-trained ResNet-101-V1. The model is fine-tuned with batch size 64 for 9000 iterations, and the learning rate is decayed once at iteration 6000. Following the original setting, we use momentum 0.9. We performed a grid search on learning rate and weight decay over the ranges $\eta \in \{0.02, 0.01, 0.005, 0.001, 0.0001\}$ and $\lambda_1 \in \{0.1, 0.01, 0.001, 0.0001\}$, and report the best average class error (1 − average accuracy) for both methods. For the L2-SP norm, we follow the authors’ setting of a constant $\lambda_2 = 0.01$. Different from the original setting for L2 regularization, we set $\lambda_2 = \lambda_1$ to simulate the normal L2 norm.
5 https://github.com/holyseven/TransferLearningClassification
E DATA AUGMENTATION
Data augmentation is an important way of increasing data quantity and diversity to make models more robust. It is even more critical for transfer learning with few instances. The effect of data augmentation can be viewed as regularization, and the choice of data augmentation can also be viewed as a hyperparameter. Most widely used data augmentation methods have been verified on training ImageNet models, such as random mirror flipping, random rescaled cropping⁶, color jittering, etc. (Szegedy et al., 2015; Xie et al., 2018).
Do these methods transfer to fine-tuning on other datasets? Here we compare three data augmentation settings under different momentum values: 1) random resized cropping: our default data augmentation; 2) random cropping: the same as the standard data augmentation except that we use random cropping with a fixed size; 3) random flip: simply random horizontal flipping. The training and validation errors of fine-tuning with different data augmentation strategies and hyperparameters are shown in Figure 11 and Figure 12.
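A sketch of the three settings using torchvision transforms (an approximation of our MXNet GluonCV pipeline; the exact color-jitter strengths are illustrative):

```python
from torchvision import transforms

# 1) random resized cropping (the default setting; footnote 6 gives the ranges)
rand_resized_crop = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])

# 2) random cropping with a fixed size
rand_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# 3) random horizontal flipping only
rand_flip = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```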
The effect of data augmentation is dataset dependent and is also influenced by other hyperparameters The first row in Figure 11 shows that advanced data augmentation with the default hyperparameters (m = 0.9 and η = 0.01) leads to overfitting on Dogs while generalizing better on Aircrafts and Flowers. Similar observations can be made in Figure 12. However, when momentum is disabled, the overfitting disappears for Dogs and Caltech. This is explainable: random resized cropping adds more variance to the gradient direction, and disabling momentum leads to a smaller ELR, which is helpful for fine-tuning from a similar domain. On the other hand, the performance of random cropping decreases when momentum is disabled. As random cropping produces training samples with less variation than random resized cropping, disabling momentum or decreasing the ELR might lead to underfitting or getting stuck in poor local minima. This can be mitigated by increasing the learning rate for random cropping, which adds variation to the gradients.
6Randomly crop a rectangular region with aspect ratio randomly sampled in [3/4, 4/3] and area randomly sampled in [8%, 100%] (Szegedy et al., 2015)
[Figure 12: learning curves (Top-1 error vs. epochs) of fine-tuning ResNet-101-v2 with η = 0.01, n = 256, λ = 0.0001 under three data augmentation strategies (random resized crop, random crop, random flip). Panels: (a) Caltech, m = 0.9; (b) Caltech, m = 0.0; (c) Indoor, m = 0.9; (d) Indoor, m = 0.0.]
As shown in Table 6, when the learning rate is increased from 0.01 to 0.05, disabling momentum shows better performance than nonzero momentum on datasets that are close to the source domain, similar to the previous findings with random resized cropping.
F SOURCE DOMAINS
Pre-trained models For most of our experiments, we use the pre-trained ResNet-101_v2 model from the model zoo of MXNet GluonCV⁷. To obtain the pre-trained models for iNat-2017 and Places-365, we fine-tune from the ImageNet pre-trained model with the default fine-tuning hyperparameters for 60 epochs, where the learning rate is decayed at epoch 45 by a factor of 10. Table 7 lists the Top-1 errors of each pre-trained model on its validation set.
Training from Scratch with HPO The default hyperparameters for training from scratch are η = 0.1, λ = 0.0001, m = 0.9 and n = 256. We train for 600 epochs and decay the learning rate at epochs 400 and 550 by a factor of 10. To perform Hyperparameter Optimization (HPO), we search hyperparameters in the following space: η ∈ {0.1, 0.2, 0.5} and λ ∈ {0.0001, 0.0005}. Figure 13 shows the training/validation errors of training from scratch on each dataset with different learning rates and weight decays. We observe that weight decay 0.0005 consistently performs better than 0.0001.
Insufficient hyperparameter search may lead to misleading conclusions To show the importance of hyperparameter tuning, Table 8 compares the performance with and without hyperparameter tuning for both fine-tuning and training from scratch. With the default hyperparameters, some inappropriate conclusions might be drawn, e.g., "there is a significant gap between fine-tuning and training from scratch", "fine-tuning always surpasses training from scratch" or "fine-tuning from iNat cannot beat the performance of ImageNet". However, with HPO, those statements may no longer hold. For example, training from scratch surpasses the default fine-tuning result on Cars and Aircrafts, and the gap between fine-tuning and training from scratch is much smaller. Previous studies (Kornblith et al., 2019; Cui et al., 2018) also identified that datasets like Cars and Aircrafts do not benefit much from fine-tuning. | 1. What are the strengths and weaknesses of the paper regarding its contributions, experiments, and comparisons with other works?
2. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
3. What are the questions raised by the reviewer regarding the paper's methodology, results, and conclusions?
4. How does the reviewer evaluate the significance and impact of the paper's findings in the context of transfer learning and fine-tuning?
5. What suggestions or recommendations does the reviewer have for improving the paper and enhancing its contributions to the field? | Review | Review
This submission studies the problem of transfer learning and fine-tuning. It proposes four insights: momentum hyperparameters are essential for fine-tuning; when the hyperparameters satisfy certain relationships, the results of fine-tuning are optimal; the similarity between source and target datasets influences the optimal choice of hyperparameters; and existing regularization methods for DNNs are not effective when the datasets are dissimilar. The submission provides multiple experiments to support these claims.
Pros:
+ This submission highlights interesting facts that were overlooked in previous research.
+ This submission examines previous theoretical results in an empirical setting and finds some optimal hyperparameter selection strategies.
+ This submission provides many experimental results of fine-tuning, along with the corresponding choices of hyperparameters, which could serve as baselines in future research.
Cons:
- All experimental results are based on the same backbone, which makes the discoveries much less reliable. More experiments on other backbones are necessary. Furthermore, this submission claims that regularization methods such as L2-SP may not work on networks with Batch Normalization modules, but there is no comparison on networks without BN.
- Providing a complete hyperparameter selection strategy for fine-tuning could be an important contribution of this submission. I suggest the authors think about it.
- This submission claims that the choice of hyperparameters should depend on the similarity between domains, but it neither proposes a proper method for measuring the similarity nor provides detailed experiments on previous measurements.
- It seems that the MIT Indoor dataset is not similar to ImageNet from a semantic point of view. This submission does not provide a similarity measurement between these datasets. Why is the optimal momentum 0?
- The effective learning rate and 'effective' weight decay are not first introduced in this submission, which makes its novelty relatively weak. The authors only test these strategies in the fine-tuning setting and find that they also work with a different initialization.
- It seems that merely searching over learning rate and weight decay with a fixed momentum (as Kornblith et al. (2018) did) would suffice if there is a fixed optimal relationship between learning rate and momentum. So is the discovery in the first part, that zero momentum can be better, based on an insufficient search over learning rates?
- This submission omits that Kornblith et al. (2018) also noted that the momentum parameter of BN is essential for fine-tuning and provided a strategy in their Section A.5. A discussion of this strategy would make this submission more complete.
This submission offers important discoveries about hyperparameter choice in the fine-tuning setting, but there are several flaws. I vote for rejecting this submission for now, but I expect the authors to improve it in a future version.
ICLR | Title
Rethinking the Hyperparameters for Fine-tuning
Abstract
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for “dissimilar” datasets. Our findings challenge common practices of finetuning and encourages deep learning practitioners to rethink the hyperparameters for fine-tuning.
1 INTRODUCTION
Many real-world applications often have a limited number of training instances, which makes directly training deep neural networks hard and prone to overfitting. Transfer learning with the knowledge of models learned on a similar task can help to avoid overfitting. Fine-tuning is a simple and effective approach of transfer learning and has become popular for solving new tasks in which pre-trained models are fine-tuned with the target dataset. Specifically, fine-tuning on pre-trained ImageNet classification models (Simonyan & Zisserman, 2015; He et al., 2016b) has achieved impressive results for tasks such as object detection (Ren et al., 2015) and segmentation (He et al., 2017; Chen et al., 2017) and is becoming the de-facto standard of solving computer vision problems. It is believed that the weights learned on the source dataset with a large number of instances provide better initialization for the target task than random initialization. Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019).
The common practice of fine-tuning is to adopt the default hyperparameters for training large models while using smaller initial learning rate and shorter learning rate schedule. It is believed that adhering to the original hyperparameters for fine-tuning with small learning rate prevents destroying the originally learned knowledge or features. For instance, many studies conduct fine-tuning of ResNets (He et al., 2016b) with these default hyperparameters: learning rate 0.01, momentum 0.9 and weight decay 0.0001. However, the default setting is not necessarily optimal for fine-tuning on other tasks. While few studies have performed extensive hyperparameter search for learning rate and weight decay (Mahajan et al., 2018; Kornblith et al., 2019), the momentum coefficient is rarely changed. Though the effectiveness of the hyperparameters has been studied extensively for training a model from scratch, how to set the hyperparameters for fine-tuning is not yet fully understood.
∗Work done while at Amazon Web Services
In addition to using ad-hoc hyperparameters, commonly held beliefs for fine-tuning also include:
• Fine-tuning pre-trained networks outperforms training from scratch; recent work (He et al., 2019) has already revisited this. • Fine-tuning from similar domains and tasks works better (Ge & Yu, 2017; Cui et al., 2018;
Achille et al., 2019; Ngiam et al., 2018). • Explicit regularization with initial models matters for transfer learning performance (Li
et al., 2018; 2019).
Are these practices or beliefs always valid? From an optimization perspective, the difference between fine-tuning and training from scratch is all about the initialization. However, the loss landscape of the pre-trained model and the fine-tuned solution could be much different, so as their optimization strategies and hyperparameters. Would the hyperparameters for training from scratch still be useful for fine-tuning? In addition, most of the hyperparameters (e.g., batch size, momentum, weight decay) are frozen; will the conclusion differ when some of them are changed?
With these questions in mind, we re-examined the common practices for fine-tuning. We conducted extensive hyperparameter search for fine-tuning on various transfer learning benchmarks with different source models. The goal of our work is not to obtain state-of-the-art performance on each fine-tuning task, but to understand the effectiveness of each hyperparameter for fine-tuning, avoiding unnecessary computation. We explain why certain hyperparameters work so well on certain datasets while fail on others, which can guide hyperparameter search for fine-tuning.
Our main findings are as follows:
• Optimal hyperparameters for fine-tuning are not only dataset dependent, but are also dependent on the similarity between the source and target domains, which is different from training from scratch. Therefore, the common practice of using optimization schedules derived from ImageNet training cannot guarantee good performance. It explains why some tasks are not achieving satisfactory results after fine-tuning because of inappropriate hyperparameter selection. Specifically, as opposed to the common practice of rarely tuning the momentum value beyond 0.9, we find that zero momentum sometimes work better for fine-tuning on tasks that are similar with the source domain, while nonzero momentum works better for target domains that are different from the source domain. • Hyperparameters are coupled together and it is the effective learning rate—which encap-
sulates the learning rate and momentum—that matters for fine-tuning performance. While effective learning rate has been studied for training from scratch, to the best of our knowledge, no previous work investigates effective learning rate for fine-tuning and is less used in practice. Our observation of momentum can be explained as small momentum actually decreases the effective learning rate, which is more suitable for fine-tuning on similar tasks. We show that the optimal effective learning rate depends on the similarity between the source and target domains. • We find regularization methods that were designed to keep models close to the initial
model does not necessarily work for “dissimilar” datasets, especially for nets with Batch Normalization. Simple weight decay can result in as good performance as the referencebased regularization methods for fine-tuning with better search space.
2 RELATED WORK
In transfer learning for image classification, the last layer of a pre-trained network is usually replaced with a randomly initialized fully connected layer with the same size as the number of classes in the target task (Simonyan & Zisserman, 2015). It has been shown that fine-tuning the whole network usually results in better performance than using the network as a static feature extractor (Yosinski et al., 2014; Donahue et al., 2014; Huh et al., 2016; Mormont et al., 2018; Kornblith et al., 2019). Ge & Yu (2017) select images that have similar local features from source domain to jointly fine-tune pre-trained networks. Cui et al. (2018) estimate domain similarity with ImageNet and demonstrate that transfer learning benefits from pre-training on a similar source domain. Besides image classification, many object detection frameworks also rely on fine-tuning to improve over training from scratch (Girshick et al., 2014; Ren et al., 2015).
Many researchers re-examined whether fine-tuning is a necessity for obtaining good performance. Ngiam et al. (2018) find that when domains are mismatched, the effectiveness of transfer learning is negative, even when domains are intuitively similar. Kornblith et al. (2019) examine the fine-tuning performance of various ImageNet models and find a strong correlation between ImageNet top-1 accuracy and the transfer accuracy. They also find that pre-training on ImageNet provides minimal benefits for some fine-grained object classification dataset. He et al. (2019) questioned whether ImageNet pre-training is necessary for training object detectors. They find the solution of training from scratch is no worse than the fine-tuning counterpart as long as the target dataset is large enough. Raghu et al. (2019) find that transfer learning has negligible performance boost on medical imaging applications, but speed up the convergence significantly.
There are many literatures on hyperparameter selection for training neural networks from scratch, mostly on batch size, learning rate and weight decay (Goyal et al., 2017; Smith et al., 2018; Smith & Topin, 2019). There are few works on the selection of momentum (Sutskever et al., 2013). Zhang & Mitliagkas (2017) proposed an automatic tuner for momentum and learning rate in SGD. There are also studies on the correlations of the hyperparameters, such as linear scaling rule between batch size and learning (Goyal et al., 2017; Smith et al., 2018; Smith, 2017). However, most of these advances on hyperparameter tuning are designed for training from scratch and have not examined on fine-tuning tasks for computer vision problems. Most work on fine-tuning simply choose fixed hyperparameters (Cui et al., 2018) or use dataset-dependent learning rates (Li et al., 2018) in their experiments. Due to the huge computational cost for hyperparameter search, only a few works (Kornblith et al., 2019; Mahajan et al., 2018) performed large-scale grid search of learning rate and weight decay for obtaining the best performance.
3 TUNING HYPERPARAMETERS FOR FINE-TUNING
In this section, we first introduce the notations and experimental settings, and then present our observations on momentum, effective learning rate and regularization. The fine-tuning process is not different from learning from scratch except for the weights initialization. The goal of the process is still to minimize the objective function L = 1N ∑N i=1 `(f(xi, θ), yi) + λ 2 ‖θ‖ 2 2, where ` is the loss function, N is the number of samples, xi is the input data, yi is its label, f is the neural network, θ is the model parameters and λ is the regularization hyperparameter or weight decay. Momentum is widely used for accelerating and smoothing the convergence of SGD by accumulating a velocity vector in the direction of persistent loss reduction (Polyak, 1964; Sutskever et al., 2013; Goh, 2017). The commonly used Nesterov’s Accelerated Gradient (Nesterov, 1983) is given by:
vt+1 = mvt − ηt 1
n n∑ i=1 ∇`(f(xi, θt +mvt), yi) (1)
θt+1 = θt + vt+1 − ηλθt (2) where θt indicates the model parameters at iteration t. The hyperparameters include the learning rate ηt, batch size n, momentum coefficient m ∈ [0, 1), and the weight decay λ.
3.1 EXPERIMENTAL SETTINGS
We evaluate fine-tuning on seven widely used image classification datasets, which covers tasks for fine-grained object recognition, scene recognition and general object recognition. Detailed statistics of each dataset can be seen in Table 1. We use ImageNet (Russakovsky et al., 2015), Places-365 (Zhou et al., 2018) and iNaturalist (Van Horn et al., 2018) as source domains for pre-trained models. We resize the input images such that the aspect ratio is preserved and the shorter side is 256 pixels. The images are normalized with mean and std values calculated over ImageNet. For data augmentation, we adopt the common practices used for training ImageNet models (Szegedy et al., 2015): random mirror, random scaled cropping with scale and aspect variations, and color jittering. The augmented images are resized to 224×224. Note that state-of-the-art results could achieve even better performance by using higher resolution images (Cui et al., 2018) or better data augmentation (Cubuk et al., 2018).
We mainly use ResNet-101-V2 (He et al., 2016a) as our base network, which is pre-trained on ImageNet (Russakovsky et al., 2015). Similar observations are also made on DenseNets (Huang et al., 2017) and MobileNet (Howard et al., 2017). The hyperparameters to be tuned (and ranges)
are: learning rate (0.1, 0.05, 0.01, 0.005, 0.001, 0.0001), momentum (0.9, 0.99, 0.95, 0.9, 0.8, 0.0) and weight decay (0.0, 0.0001, 0.0005, 0.001). We set the default hyperparameters to be batch size 2561, learning rate 0.01, momentum 0.9 and weight decay 0.0001. To avoid insufficient training and observe the complete convergence behavior, we use 300 epochs for fine-tuning and 600 epochs for scratch-training, which is long enough for the training curves to converge. The learning rate is decayed by a factor of 0.1 at epoch 150 and 250. We report the Top-1 validation (test) error at the end of training. The total computation time for the experiments is more than 10K GPU hours.
3.2 EFFECT OF MOMENTUM AND DOMAIN SIMILARITY
Momentum 0.9 is the most widely used value for training from scratch (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016b) and is also widely adopted for fine-tuning (Kornblith et al., 2019). To the best of our knowledge, it is rarely changed, regardless of the network architectures or target tasks. To check the influence of momentum on fine-tuning, we first search for the best momentum value for fine-tuning on the Birds dataset with different weight decay and learning rate. Figure 1(a) shows the performance of fine-tuning with and without weight decays. Surprisingly, momentum zero actually outperforms the nonzero momentum. The optimal learning rate also increases when the momentum is disabled as shown in Figure 1(b).
To verify this observation, we further compare momentum 0.9 and 0.0 on other datasets. Table 2 shows the performance of 8 hyperparameter settings on 7 datasets. We observe a clear pattern that disabling momentum works better for Dogs, Caltech and Indoor, while momentum 0.9 works better for Cars, Aircrafts and Flowers.
1 For each training job with ResNet-101 and batch size 256, we use 8 NVIDIA Tesla V100 GPUs for synchronous training, where each GPU uses a batch of 32 and no SyncBN is used.
Interestingly, datasets such as Dogs, Caltech, Indoor and Birds are known to have high overlap with ImageNet dataset2, while Cars and Aircrafts are identified to be difficult to benefit from fine-tuning from pre-trained ImageNet models (Kornblith et al., 2019). According to Cui et al. (2018), in which the Earth Mover’s Distance (EMD) is used to calculate the similarity between ImageNet and other domains, the similarity to Dogs and Birds are 0.619 and 0.563, while the similarity to Cars, Aircrafts and Flowers are 0.560, 0.556 and 0.5253. The relative order of similarities to ImageNet is
Dogs, Birds, Cars, Aircrafts and Flowers
which aligns well with the transition of optimal momentum value from 0.0 to 0.9. Following the similarity calculation, we can also verified Caltech and Indoor are more close to ImageNet than Cars/Aircrafts/Flowers (Table 3.3).
To verify the connection between momentum and domain similarity, we further fine-tune from different source domains such as Places-365 and iNaturalist, which are known to be better source domains than ImageNet for fine-tuning on Indoor and Birds dataset (Cui et al., 2018). We may expect that fine-tuning from iNaturalist works better for Birds with m = 0 and similarly, Places for Indoor. Indeed, as shown in Table 3, disabling momentum improves the performance when the source and target domain are similar, such as Places for Indoor and iNaturalist for Birds.
Small momentum works better for fine-tuning on domains that are close to the source domain One explanation for the above observations is that because the Dogs dataset is very close to ImageNet, the pre-trained ImageNet model is expected to be close to the fine-tuned solution on the Dogs dataset. In this case, momentum may not help much, as the gradient direction around the minimum can be quite random and accumulating the momentum direction could be meaningless. Whereas, for
2 Stanford Dogs (Khosla et al., 2011) was built using images and annotations from ImageNet for the task of fine-grained image categorization. Caltech-256 has at least 200 categories that exist in ImageNet (Deng et al., 2010). Images in the CUB-Birds dataset overlap with images in ImageNet.
3 The domain similarity calculation is discussed in Appendix B and the exact values can be found in Table 4 (Section 3.3).
faraway target domains (e.g., Cars and Aircrafts), where the pre-trained ImageNet model could be quite different from the fine-tuned solution, the fine-tuning process is more similar to training from scratch, where large momentum stabilizes the descent directions towards the minimum. An illustration of the difference can be found in Figure 2.
Connections to early observations on decreasing momentum Early work (Sutskever et al., 2013) actually pointed out that reducing momentum during the final stage of training allows finer convergence, while aggressive momentum would prevent this. They recommended reducing momentum from 0.99 to 0.9 in the last 1000 parameter updates, but not disabling it completely. Recent work (Liu et al., 2018; Smith, 2018) showed that a large momentum helps escape from saddle points but can hurt the final convergence within the neighborhood of the optima, implying that momentum should be reduced at the end of training. Liu et al. (2018) find that a larger momentum introduces higher variance of noise and encourages more exploration at the beginning of optimization, and more aggressive exploitation at the end of training. They suggest that at the final stage of the step size annealing, momentum SGD should use a much smaller step size than vanilla SGD. When applied to fine-tuning, we can interpret this as follows: if the pre-trained model lies in the neighborhood of the optimal solution on the target dataset, the momentum should be small. Our work provides empirical evidence that disabling momentum helps final convergence, and fine-tuning on close domains is a good exemplar.
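As a reference point, the schedule recommended by Sutskever et al. (2013) can be sketched as below (function name ours):

```python
def momentum_at_step(step, total_steps, high=0.99, low=0.9, final_updates=1000):
    """Reduce momentum from `high` to `low` over the last `final_updates`
    parameter updates, as recommended by Sutskever et al. (2013);
    note they do not disable momentum completely."""
    return high if step < total_steps - final_updates else low
```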
3.3 COUPLED HYPERPARAMETERS AND THE VIEW OF EFFECTIVE LEARNING RATE
So far we have examined the effect of momentum by fixing the other hyperparameters and allowing only momentum to change. But note that the two difficult scenarios shown in Figure 2 (b) and (c) might also be mitigated by increasing or decreasing the learning rate. That is, hyperparameters are coupled, and varying one hyperparameter can change the optimal values of the other hyperparameters that lead to the best performance. In addition, optimal values of certain hyperparameters depend on the values of other hyperparameters in systematic ways. For example, learning rate is entangled with batch size, momentum and weight decay. There is a notion of effective learning rate (ELR) (Hertz et al., 1991; Smith et al., 2018; Smith & Le, 2018) for SGD with momentum: η′ = η/(1−m), which was shown to be more closely related to training dynamics and final performance than η. The effective learning rate with m = 0.9 is 10× higher than the one with m = 0.0 if other hyperparameters are fixed, which is probably why we see an increase in the optimal learning rate when momentum is disabled in Figure 1(b) and Appendix A.
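To see where the factor 1/(1−m) comes from, consider the momentum update v ← mv − ηg on a constant gradient g: the velocity converges to a geometric series with limit −ηg/(1−m). A self-contained sketch:

```python
# Steady-state step size of SGD with momentum on a constant gradient g.
# Iterating v <- m*v - eta*g gives v -> -eta*g*(1 + m + m^2 + ...) = -eta*g/(1-m),
# so the parameters effectively move with step size eta/(1 - m) per iteration.
eta, m, g = 0.01, 0.9, 1.0
v = 0.0
for _ in range(200):
    v = m * v - eta * g
print(round(v, 6))  # ~ -0.1 = -eta*g/(1 - m): a 10x larger effective step than eta
```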
It is the effective learning rate that matters for fine-tuning performance Because hyperparameters are coupled, looking at the performance with only one hyperparameter varied may give a
misleading understanding of the effect of hyperparameters. Therefore, to examine the effect of momentum, we should report the best result obtainable with and without momentum, as long as the other hyperparameters are sufficiently explored. We re-examine the previous experiments that demonstrated the importance of momentum tuning, this time holding the ELR η′ = η/(1 − m) fixed instead of simply fixing the learning rate η. Figure 3 shows that when η′ is constant, the best performance obtained by m = 0.9 and m = 0 is almost equivalent when other hyperparameters are allowed to change. However, different ELRs do result in different performance, which indicates its importance for the best performance. This explains why the common practice of changing only the learning rate generally works: though changing momentum may produce the same result, both change the ELR. In fact, as long as the initial learning rate is small enough, we can always search for the optimal momentum, since momentum acts as an amplifier, making the ELR larger by a factor of 1/(1−m). Therefore, momentum does determine the search range of the learning rate.
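As a concrete illustration of holding the ELR fixed, one can enumerate (η, m) pairs that share the same η′ and compare their best results; a minimal sketch:

```python
# (eta, m) pairs with identical effective learning rate eta' = eta / (1 - m).
elr = 0.01
for m in (0.0, 0.8, 0.9, 0.95, 0.99):
    eta = elr * (1 - m)
    print(f"m = {m:<4}  eta = {eta:.5f}  ->  ELR = {eta / (1 - m):.3f}")
```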
Optimal ELR depends on the similarity between source domain and target domain Now that we have shown the ELR is critical for fine-tuning performance, we are interested in the factors that determine the optimal ELR for a given task. Previous work (Smith & Le, 2018) found that there is an optimum ELR which maximizes the test accuracy. However, those observations are based only on training from scratch on small datasets (e.g., CIFAR-10); the relationship between ELR and domain similarity, especially for fine-tuning, is still unexplored. To examine this, we search for the best ELR on each fine-tuning task and report in Fig. 4 the best validation error obtained by each ELR while allowing other hyperparameters to change. It shows that the optimal ELR depends on both the source domain and the target domain. As shown in Fig. 4 (a-c), the optimal ELRs for Dogs/Caltech/Indoor are much smaller than those for Aircrafts/Flowers/Cars when fine-tuned from the ImageNet pre-trained model. Similar observations can be made on DenseNets and MobileNet. Though the optimal ELR values differ, the relative order of domain similarity is consistent and architecture agnostic. We can also see that a smaller ELR works better when the source domain and target domain are similar, such as Dogs for ImageNet and Birds for iNat2017 (Fig. 4 (a, d-e)). Interestingly, the optimal ELR for training from scratch is much larger and very similar across different target datasets, which indicates that the distance from a random initialization is uniformly similar across target datasets.
Optimal ELR selection based on domain similarity Now we have made qualitative observations about the relationship between domain similarity and optimal ELR. A quantitative characterization of the relationship could reduce the hyperparameter search ranges for HPO, or even eliminate HPO by accurately predicting hyperparameters. We followed the domain similarity calculation in Cui et al. (2018) and recalculated similarity scores for all source-target domain pairs. Note that the original domain similarity calculation in Cui et al. (2018) uses pre-trained JFT (Sun et al., 2017) models as the feature extractor, which are not publicly available. We alternatively use the ImageNet pre-trained model or the source model as the feature extractor. As shown in Table 4, there is a good correlation between the domain similarity score and the scale of the optimal ELR. Generally, the more similar the two domains, the smaller the optimal ELR. Though the optimal ELR does not strictly track the domain similarity score, the score provides a reasonable prediction of the scale of the optimal ELR, such as [0.001, 0.01], [0.01, 0.1] or [0.1, 1.0], and can therefore reduce the search space. Based on this correlation, a simple strategy can be developed for optimal ELR selection given a frequently used source model: one can calculate domain similarities and perform exhaustive hyperparameter searches for a few reference datasets, including similar and dissimilar ones. Then, given a new dataset to fine-tune on, one can calculate its domain similarity, compare it with the scores of the reference datasets, and choose the ELR range of the reference with the closest domain similarity.
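A minimal sketch of this selection strategy; the similarity scores and range boundaries below are illustrative placeholders, not the exact values from Table 4:

```python
# Reference datasets: domain similarity to the source model and the ELR range
# found by exhaustive search (illustrative values only).
references = [
    (0.62, (1e-3, 1e-2)),   # very similar to the source (Dogs-like)
    (0.56, (1e-2, 1e-1)),   # moderately similar (Birds-like)
    (0.52, (1e-1, 1e0)),    # dissimilar (Flowers-like)
]

def elr_search_range(similarity):
    """Pick the ELR range of the reference dataset with the closest similarity."""
    closest = min(references, key=lambda ref: abs(ref[0] - similarity))
    return closest[1]

print(elr_search_range(0.60))  # -> (0.001, 0.01)
```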
Weight Decay and Learning Rate The relationship between weight decay and the effective learning rate has recently been well studied (van Laarhoven, 2017; Zhang et al., 2018; Loshchilov & Hutter, 2018). It was shown that the effect of weight decay on models with BN layers is equivalent to increasing the ELR by shrinking the weight scales, i.e., η′ ∼ η/‖θ‖₂². If an optimal effective learning rate exists, the optimal weight decay value λ is inversely related to the optimal learning rate η. The 'effective' weight decay is λ′ = λ/η. We show in Figure 5 that the optimal effective weight decay is also correlated with domain similarity.
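To make the bookkeeping explicit, a small helper computes both effective quantities; the names and the combined BN scaling are our own simplifications:

```python
def effective_hparams(eta, m, lam, weight_norm_sq=None):
    """ELR eta' = eta / (1 - m); with BN layers eta' further scales roughly as
    1 / ||theta||_2^2. Effective weight decay is lam' = lam / eta."""
    elr = eta / (1.0 - m)
    if weight_norm_sq is not None:
        elr = elr / weight_norm_sq   # eta' ~ eta / ||theta||^2 for BN networks
    return elr, lam / eta

print(effective_hparams(0.01, 0.9, 1e-4))  # -> (0.1, 0.01)
```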
3.4 THE CHOICE OF REGULARIZATION
L2 regularization or weight decay is widely used for constraining the model capacity (Hanson & Pratt, 1989; Krogh & Hertz, 1992). Recently, Li et al. (2018; 2019) pointed out that standard L2 regularization, which drives the parameters towards the origin, is not adequate in transfer learning. To retain the knowledge learned by the pre-trained model, reference-based regularization was used to regularize the distance between the fine-tuned weights and the pre-trained weights, so that the fine-tuned model is not too different from the initial model. Li et al. (2018) propose the L2-SP norm, i.e., (λ1/2)‖θ′ − θ0‖₂² + (λ2/2)‖θ′′‖₂², where θ′ refers to the part of the network shared with the source network, and θ′′ refers to the novel part, e.g., the last layer with a different number of neurons (a minimal sketch of this penalty is given after the list below). While the motivation is intuitive, there are several issues with adopting reference-based regularization for fine-tuning:
• Many applications actually adopt fine-tuning on target domains that are quite different from the source domain, such as fine-tuning ImageNet models for medical imaging (Mormont et al., 2018; Raghu et al., 2019). The fine-tuned model does not necessarily have to be close to the initial model.
• The scale invariance introduced by Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers enables models with different parameter scales to function identically, i.e., f(θ) = f(αθ). Therefore, even when L2 regularization drives ‖θ‖₂² towards zero, the model could still have the same functionality as the initial model. On the contrary, a model could still be different even when the L2-SP norm is small.
• L2-SP regularization constrains θ′ to be close to θ0, so that ‖θ‖₂² stays relatively stable in comparison with L2 regularization. Given that the ELR is approximately proportional to η/‖θ‖₂² and a smaller ELR is beneficial for fine-tuning from similar domains, this may explain why L2-SP provides better performance. If this is true, then by decreasing the initial ELR, the plain L2 norm may function the same.
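As a concrete reference for the L2-SP penalty discussed above, here is a minimal PyTorch sketch (helper names ours; the experiments in Appendix D use the authors' original code):

```python
import torch

def l2_sp_penalty(model, init_params, lambda1, lambda2):
    """L2-SP (Li et al., 2018): pull the shared weights toward the pre-trained
    values theta_0, and apply plain L2 to the novel part (e.g., the new head)."""
    shared = torch.zeros(())
    novel = torch.zeros(())
    for name, p in model.named_parameters():
        if name in init_params:                       # theta': shared with source
            shared = shared + ((p - init_params[name]) ** 2).sum()
        else:                                         # theta'': novel layers
            novel = novel + (p ** 2).sum()
    return 0.5 * lambda1 * shared + 0.5 * lambda2 * novel

# Snapshot taken once, before fine-tuning starts (keys exist only for layers
# inherited from the source network):
# init_params = {k: v.detach().clone() for k, v in pretrained.named_parameters()}
# loss = criterion(model(x), y) + l2_sp_penalty(model, init_params, 0.01, 0.01)
```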
To examine these conjectures, we revisited the work of Li et al. (2018) with additional experiments. To show the effectiveness of the L2-SP norm, the authors conducted experiments on datasets such as Dogs, Caltech and Indoor, which are all close to the source domain (ImageNet or Places-365). We extend their experiments by fine-tuning on both "similar" and "dissimilar" datasets, including Birds, Cars, Aircrafts and Flowers, with both L2 and L2-SP regularization (details in Appendix D). For a fair comparison, we perform the same hyperparameter search for both methods. As expected, Table 5 shows that L2 regularization is very competitive with L2-SP on Birds, Cars, Aircrafts and Flowers, which indicates that reference-based regularization may not generalize well for fine-tuning on dissimilar domains.
We also check the change of the regularization terms during training for both methods, as well as their best hyperparameters. As shown in Figure 6, L2 regularization usually decreases the weight norm more aggressively, depending on the value of λ, while L2-SP regularization keeps the norm less changed. We can see that the optimal learning rate of L2 regularization is mostly smaller than that of L2-SP, which may compensate for the decreased weight norm or increased ELR. Interestingly, for the Dogs dataset, both regularization terms grow much larger after a few iterations and then become stable, which means constraining the weights to be close to the initialization is not necessarily the reason for L2-SP to work, even for close domains. It also seems contradictory to the previous finding (Zhang et al., 2018) that weight decay functions as increasing the ELR by decreasing weight norms. However, it might be reasonable, as a large norm actually decreases the ELR, which could be helpful due to the close domain similarity between Dogs and ImageNet.
4 DISCUSSION
The two extreme ways of selecting hyperparameters, performing an exhaustive hyperparameter search or taking ad-hoc hyperparameters from scratch training, could be either too computationally expensive or yield inferior performance. Different from training from scratch, where the default hyperparameter setting may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but is also influenced by the similarity between the target and source domains. The rarely tuned momentum value can also improve or impede performance when the target domain and source domain are close, given an insufficiently searched learning rate. These observations connect with previous theoretical works on decreasing momentum at the end of training and on the effective learning rate. We further identify that the optimal effective learning rate correlates with the similarity between the source and target domains. With this understanding, one can significantly reduce the hyperparameter search space. We hope these findings can be one step towards better and more efficient hyperparameter selection for fine-tuning.
ACKNOWLEDGMENTS
The authors would like to thank all anonymous reviewers for their valuable feedback.
A THE EFFECTIVENESS OF MOMENTUM
Searching for Optimal Momentum To check the effectiveness of momentum on fine-tuning, we search for the best momentum values with a fixed learning rate but different weight decays and batch sizes. Taking the Birds dataset as an example, Figure 7 provides the convergence curves for the results shown in Figure 1(a), i.e., the learning curves of fine-tuning with 6 different batch size and weight decay combinations. Zero momentum outperforms nonzero momentum in 5 out of 6 configurations.
Effective learning rate increases after disabling momentum. Figure 8 compares the performance with and without momentum on the Dogs dataset over a range of learning rates. Note that the learning rate achieving similar performance generally increases 10× after changing m from 0.9 to 0.0, which is consistent with the effective learning rate rule η′ = η/(1−m). The same observations can be made on other datasets, as shown in Figure 9.
[Figure 9: Top-1 error vs. epochs for fine-tuning ResNet-101_v2 (n = 256, λ = 0.0001) with different learning rates. (a) Caltech, m = 0.9: final errors 20.69 / 19.07 / 14.85 / 13.42 / 12.07 / 11.64 / 14.70 for η = 0.1 / 0.05 / 0.01 / 0.005 / 0.001 / 0.0005 / 0.0001. (b) Caltech, m = 0.0: 14.67 / 13.29 / 12.11 / 11.86 / 14.62 / 19.39 / 81.26. (c) Indoor, m = 0.9: 27.29 / 25.64 / 23.76 / 24.59 / 22.34 / 21.29 / 29.39. (d) Indoor, m = 0.0: 23.46 / 22.04 / 21.14 / 21.96 / 29.69 / 41.00 / 88.23.]
B DOMAIN SIMILARITY
The domain similarity calculation based on the Earth Mover's Distance (EMD) is introduced in Section 4.1 of Cui et al. (2018)⁴. Here we briefly introduce the steps. In Cui et al. (2018), the authors first train a ResNet-101 on the large-scale JFT dataset (Sun et al., 2017) and use it as a feature extractor. They extract features from the penultimate layer of the model for each image in the training set of the source domain and the target domain. For ResNet-101, the length of the feature vector is 2048. The features of images belonging to the same category are averaged, and g(s_i) denotes the average feature vector of the ith label in the source domain S; similarly, g(t_j) denotes the average feature vector of the jth label in the target domain T. The distance between the averaged features of two labels is d_{i,j} = ‖g(s_i) − g(t_j)‖. Each label is associated with a weight w ∈ [0, 1] corresponding to the percentage of images with this label in the dataset. So the source domain S with m labels and the target domain T with n labels can be represented as S = {(s_i, w_{s_i})}_{i=1}^m and T = {(t_j, w_{t_j})}_{j=1}^n. The EMD between the two domains is defined as
d(S, T) = EMD(S, T) = \frac{\sum_{i=1,j=1}^{m,n} f_{i,j}\, d_{i,j}}{\sum_{i=1,j=1}^{m,n} f_{i,j}}    (3)
where the optimal flow f_{i,j} corresponds to the least amount of total work, obtained by solving the EMD optimization problem. The domain similarity is then defined as
sim(S, T) = e^{-\gamma\, d(S, T)}    (4)
where γ = 0.01. Note that the domain similarity value does not range from 0 to 1.
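A minimal sketch of Eqs. (3)-(4), assuming the per-class mean features have already been extracted; we use the POT (Python Optimal Transport) package for the EMD solver, which differs from the original implementation. With the marginals normalized to sum to one, the total flow in Eq. (3) is 1, so the transport cost equals d(S, T).

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install POT)

def domain_similarity(src_feats, src_w, tgt_feats, tgt_w, gamma=0.01):
    """src_feats: (m, d) class-mean features g(s_i); src_w: (m,) class weights.
    tgt_feats: (n, d); tgt_w: (n,). Returns sim(S, T) = exp(-gamma * d(S, T))."""
    # Pairwise Euclidean distances d_{i,j} between class-mean features.
    D = np.linalg.norm(src_feats[:, None, :] - tgt_feats[None, :, :], axis=-1)
    # Optimal-transport cost with class frequencies as (normalized) marginals.
    cost = ot.emd2(src_w / src_w.sum(), tgt_w / tgt_w.sum(), D)
    return np.exp(-gamma * cost)
```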
Due to the unavailability of the large-scale JFT dataset (300× larger than ImageNet) and its pre-trained ResNet-101 model, we cannot use it to extract features for new datasets, such as Caltech-256 and
4 The extracted features and code are available at https://github.com/richardaecn/cvpr18-inaturalist-transfer
MIT67-Indoor. Instead of using that powerful feature representation, we use our pre-trained ImageNet model (ResNet-101) as the feature extractor. Table 4 compares the domain similarities calculated by different pre-trained models, and we can see some consistent patterns across different architectures: e.g., the 1st and 2nd highest similarity scores are Caltech and Dogs regardless of architecture; the 3rd and 4th highest similarity scores refer to Birds and Indoor; and the most dissimilar datasets are Cars, Aircrafts and Flowers, though the relative orders among them are not exactly the same. Besides using a fixed feature extractor, an alternative is to use the source domain model directly as the feature extractor for both the source domain and the target domain, which may capture the transfer learning process more precisely than a uniform feature extractor.
C THE EFFECTIVENESS OF BN MOMENTUM
Kornblith et al. (2019) conducted extensive fine-tuning experiments with different hyperparameters. One observation they made is that the momentum parameter of the BN layers is essential for fine-tuning. They found it useful to decrease the BN momentum parameter from its ImageNet value to max(1 − 10/s, 0.9), where s is the number of steps per epoch. This changes the default BN momentum value (0.9) only when s is larger than 100, i.e., only when the dataset size is larger than 25.6K with batch size 256. The largest dataset used in our experiments is Caltech-256, which has 15K images, so this strategy is not applicable.
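As a sketch, the heuristic can be written as follows (function name ours):

```python
def bn_momentum(num_train_images, batch_size=256, default=0.9):
    """Kornblith et al. (2019): decrease BN momentum to max(1 - 10/s, 0.9),
    where s is the number of steps per epoch."""
    s = num_train_images // batch_size
    return max(1.0 - 10.0 / s, default)

print(bn_momentum(15000))    # Caltech-256-like size: s ~ 58, rule keeps 0.9
print(bn_momentum(1281167))  # ImageNet-like size: s ~ 5004, returns ~0.998
```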
We further validate the effect of BN momentum by performing a study similar to that for the ELR. The goal is to identify whether there is an optimal BN momentum for a given task. For each dataset, we fine-tune the pre-trained model using the previously obtained best hyperparameters and only vary BN momentum. In addition to the default value 0.9, we also set it to 0.0, 0.95 and 0.99. The rationale is that if BN momentum is a critical hyperparameter, we should expect significant performance differences when the value is changed from its optimal value. As shown in Figure 10, m_bn = 0.99 slightly improves the performance for some datasets; however, there is no significant performance difference among values greater than 0.9. One hypothesis is that similar domains share similar BN parameters and statistics, and BN momentum may affect the parameter adaptation. More investigation is still needed to fully understand its effectiveness.
D EXPERIMENTAL SETTINGS FOR COMPARISON OF L2 AND L2-SP
The experiments in Section 3.4 are based on the code⁵ provided by Li et al. (2018). The base network is an ImageNet pre-trained ResNet-101-V1. The model is fine-tuned with batch size 64 for 9000 iterations, and the learning rate is decayed once at iteration 6000. Following the original setting, we use momentum 0.9. We performed a grid search on learning rate and weight decay, with the ranges η ∈ {0.02, 0.01, 0.005, 0.001, 0.0001} and λ1 ∈ {0.1, 0.01, 0.001, 0.0001}, and report the best average class error (1 − average accuracy) for both methods. For the L2-SP norm, we follow the authors' setting of a constant λ2 = 0.01. Different from the original setting for L2 regularization, we set λ2 = λ1 to simulate the normal L2 norm.
5 https://github.com/holyseven/TransferLearningClassification
E DATA AUGMENTATION
Data augmentation is an important way of increasing data quantity and diversity to make models more robust, and it is especially critical for transfer learning with few instances. The effect of data augmentation can be viewed as regularization, and the choice of data augmentation can also be viewed as a hyperparameter. Most widely used data augmentation methods have been validated for training ImageNet models, such as random mirror flipping, random rescaled cropping⁶ and color jittering (Szegedy et al., 2015; Xie et al., 2018).
Do these methods transfer to fine-tuning on other datasets? Here we compare three data augmentation settings under different momentum values: 1) random resized cropping: our default data augmentation; 2) random cropping: the same as the default except that we use random cropping with a fixed size; 3) random flip: simply random horizontal flipping. The training and validation errors of fine-tuning with different data augmentation strategies and hyperparameters are shown in Figure 11 and Figure 12.
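For reference, a minimal sketch of the three pipelines written with torchvision transforms rather than the MXNet GluonCV pipeline actually used in the experiments; the exact composition (e.g., whether flipping accompanies random cropping, and how images reach 224×224 in the flip-only setting) is our assumption.

```python
from torchvision import transforms

# 1) random resized cropping (default): scale and aspect-ratio jitter, cf. footnote 6.
rand_resized_crop = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3)),
    transforms.RandomHorizontalFlip(),
])

# 2) random cropping with a fixed size (images pre-resized to shorter side 256).
rand_crop = transforms.Compose([
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
])

# 3) random flip only (center crop to reach the fixed 224x224 input size).
rand_flip = transforms.Compose([
    transforms.CenterCrop(224),
    transforms.RandomHorizontalFlip(),
])
```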
The effect of data augmentation is dataset dependent and is also influenced by other hyperparameters The first row in Figure 11 shows that advanced data augmentation with the default hyperparameters (m = 0.9 and η = 0.01) leads to overfitting on Dogs while generalizing better on Aircrafts and Flowers. Similar observations can be made in Figure 12. However, when momentum is disabled, the overfitting disappears for Dogs and Caltech. This is explainable, since random resized cropping adds more variance to the gradient direction, and disabling momentum leads to a smaller ELR, which is helpful for fine-tuning from a similar domain. On the other hand, the performance of random cropping decreases when momentum is disabled. As random cropping produces training samples with less variation than random resized cropping, disabling momentum or decreasing the ELR might lead to underfitting or getting stuck in poor local minima. This can be mitigated by increasing the learning rate for random cropping, which adds variation to the gradients.
6Randomly crop a rectangular region with aspect ratio randomly sampled in [3/4, 4/3] and area randomly sampled in [8%, 100%] (Szegedy et al., 2015)
[Figure 12: error vs. epochs for different data augmentation strategies (ResNet-101_v2, η = 0.01, n = 256, λ = 0.0001). (a) Caltech, m = 0.9: rand resized crop 14.85, rand crop 12.42, rand flip 12.34. (b) Caltech, m = 0.0: rand resized crop 12.11, rand crop 12.89. (c) Indoor, m = 0.9: rand resized crop 23.76, rand crop 23.39, rand flip 23.31. (d) Indoor, m = 0.0: rand resized crop 21.14, rand crop 25.19.]
As shown in Table 6, when the learning rate is increased from 0.01 to 0.05, disabling momentum shows better performance than nonzero momentum on datasets that are close to the source domain, similar to the previous findings with random resized cropping.
F SOURCE DOMAINS
Pre-trained models For most of our experiments, we use the pre-trained ResNet-101_v2 model from the model zoo of MXNet GluonCV⁷. To get the pre-trained models for iNat-2017 and Places-365, we fine-tune from the ImageNet pre-trained model with the default fine-tuning hyperparameters for 60 epochs, where the learning rate is decayed at epoch 45 by a factor of 10. Table 7 reports the Top-1 errors of each pre-trained model on its validation set.
Training from Scratch with HPO The default hyperparameters for training from scratch are η = 0.1, λ = 0.0001, m = 0.9 and n = 256. We train for 600 epochs and decay the learning rate at epochs 400 and 550 by a factor of 10. To perform hyperparameter optimization (HPO), we search hyperparameters in the following space: η ∈ {0.1, 0.2, 0.5} and λ ∈ {0.0001, 0.0005}. Figure 13 shows the training/validation errors of training from scratch on each dataset with different learning rates and weight decays. We observe that weight decay 0.0005 consistently performs better than 0.0001.
Insufficient hyperparameter search may lead to misleading conclusions To show the importance of hyperparameter tuning, Table 8 compares the performance with and without hyperparameter tuning for both fine-tuning and training-from-scratch tasks. With the default hyperparameters, some inappropriate conclusions might be drawn, e.g., "there is a significant gap between fine-tuning and training from scratch", "fine-tuning always surpasses training from scratch" or "fine-tuning from iNat cannot beat the performance of ImageNet". However, with HPO, those statements may not be valid. For example, training from scratch surpasses the default fine-tuning result on Cars and Aircrafts, and the gap between fine-tuning and training from scratch is much smaller. Previous studies (Kornblith et al., 2019; Cui et al., 2018) also identified that datasets like Cars and Aircrafts do not benefit much from fine-tuning.

1. What is the focus of the paper regarding image recognition models?
2. What are the strengths and weaknesses of the paper's approach to studying hyperparameters?
3. Do you have any concerns about the importance of momentum in finetuning performance?
4. How do you assess the clarity and readability of the paper's figures and writing?
5. What are your suggestions for promoting reproducibility in this type of research?
6. Are there any minor issues or typos in the paper that should be addressed?

Review
This paper studies the role of different hyperparameters in fine-tuning image recognition models on new target tasks. The authors run a large set of experiments and show that, perhaps unsurprisingly, hyperparameters matter. In particular, they show that momentum, which is typically ignored in fine-tuning, is quite important, and that the momentum values that work well depend on the similarity between the source and target datasets. They also show important correlations between momentum, learning rate, and weight decay.
Overall, despite some issues detailed below, the paper is clearly written, presents a coherent story, and its conclusions will be useful to the community.
Comments:
1. My main concern about this paper relates to the importance of momentum. The authors argue that this hyperparameter is "critical for fine-tuning performance". However, they later show that in fact what matters is the ratio between the learning rate (LR) and the momentum. In this case, it might be justified to fix the momentum value and only modify the LR, as often done.
2. The EMD values of Birds, Cars and Aircrafts are within 0.7 points of each other (while Dogs is much higher and Flowers is quite lower). Although I am not too familiar with this method, I find it somewhat hard to believe that these small differences explain the error differences on Table 2.
3. While the paper is fairly clear in writing, the figures (e.g., fig. 3 and 4) are extremely hard to read on print, and thus hard to draw conclusions from. Figure 4 is confusing also on screen.
4. To promote reproducibility, it would be better to report in this kind of research validation rather than test results. There is some confusion in Figure 4, the axes say validation error, while the caption says test error, but in the other figures test results are reported.
Minor:
1. The authors say in the intro "Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019).", but later make a somewhat contradictory claim: "He et al. (2019) questioned whether ImageNet pre-training is necessary for training object detectors. They find the solution of training from scratch is no worse than the fine-tuning counterpart as long as the target dataset is large enough.".
2. A couple of typos around the paper:
- section 2: "However, most of these advances on hyperparameter tuning are designed *from* training from scratch" (should be "for")
- The first sentence of 3.3 is ungrammatical.
We mainly use ResNet-101-V2 (He et al., 2016a) as our base network, which is pre-trained on ImageNet (Russakovsky et al., 2015). Similar observations are also made on DenseNets (Huang et al., 2017) and MobileNet (Howard et al., 2017). The hyperparameters to be tuned (and ranges)
are: learning rate (0.1, 0.05, 0.01, 0.005, 0.001, 0.0001), momentum (0.9, 0.99, 0.95, 0.9, 0.8, 0.0) and weight decay (0.0, 0.0001, 0.0005, 0.001). We set the default hyperparameters to be batch size 2561, learning rate 0.01, momentum 0.9 and weight decay 0.0001. To avoid insufficient training and observe the complete convergence behavior, we use 300 epochs for fine-tuning and 600 epochs for scratch-training, which is long enough for the training curves to converge. The learning rate is decayed by a factor of 0.1 at epoch 150 and 250. We report the Top-1 validation (test) error at the end of training. The total computation time for the experiments is more than 10K GPU hours.
3.2 EFFECT OF MOMENTUM AND DOMAIN SIMILARITY
Momentum 0.9 is the most widely used value for training from scratch (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016b) and is also widely adopted for fine-tuning (Kornblith et al., 2019). To the best of our knowledge, it is rarely changed, regardless of the network architectures or target tasks. To check the influence of momentum on fine-tuning, we first search for the best momentum value for fine-tuning on the Birds dataset with different weight decay and learning rate. Figure 1(a) shows the performance of fine-tuning with and without weight decays. Surprisingly, momentum zero actually outperforms the nonzero momentum. The optimal learning rate also increases when the momentum is disabled as shown in Figure 1(b).
To verify this observation, we further compare momentum 0.9 and 0.0 on other datasets. Table 2 shows the performance of 8 hyperparameter settings on 7 datasets. We observe a clear pattern that disabling momentum works better for Dogs, Caltech and Indoor, while momentum 0.9 works better for Cars, Aircrafts and Flowers.
1 For each training job with ResNet-101 and batch size 256, we use 8 NVIDIA Tesla V100 GPUs for synchronous training, where each GPU uses a batch of 32 and no SyncBN is used.
Interestingly, datasets such as Dogs, Caltech, Indoor and Birds are known to have high overlap with ImageNet dataset2, while Cars and Aircrafts are identified to be difficult to benefit from fine-tuning from pre-trained ImageNet models (Kornblith et al., 2019). According to Cui et al. (2018), in which the Earth Mover’s Distance (EMD) is used to calculate the similarity between ImageNet and other domains, the similarity to Dogs and Birds are 0.619 and 0.563, while the similarity to Cars, Aircrafts and Flowers are 0.560, 0.556 and 0.5253. The relative order of similarities to ImageNet is
Dogs, Birds, Cars, Aircrafts and Flowers
which aligns well with the transition of optimal momentum value from 0.0 to 0.9. Following the similarity calculation, we can also verified Caltech and Indoor are more close to ImageNet than Cars/Aircrafts/Flowers (Table 3.3).
To verify the connection between momentum and domain similarity, we further fine-tune from different source domains such as Places-365 and iNaturalist, which are known to be better source domains than ImageNet for fine-tuning on Indoor and Birds dataset (Cui et al., 2018). We may expect that fine-tuning from iNaturalist works better for Birds with m = 0 and similarly, Places for Indoor. Indeed, as shown in Table 3, disabling momentum improves the performance when the source and target domain are similar, such as Places for Indoor and iNaturalist for Birds.
Small momentum works better for fine-tuning on domains that are close to the source domain One explanation for the above observations is that because the Dogs dataset is very close to ImageNet, the pre-trained ImageNet model is expected to be close to the fine-tuned solution on the Dogs dataset. In this case, momentum may not help much as the gradient direction around the minimum could be much random and accumulating the momentum direction could be meaningless. Whereas, for
2Stanford Dogs (Khosla et al., 2011) was built using images and annotation from ImageNet for the task of fine-grained image categorization. Caltech-256 has at least 200 categories exist in ImageNet (Deng et al., 2010). Images in the CUB-Birds dataset overlap with images in ImageNet.
3The domain similarity calucation is discussed in Appendix B and the exact value can be found in Table 3.3.
faraway target domains (e.g., Cars and Aircrafts) where the pre-trained ImageNet model could be much different with the fine-tuned solution, the fine-tuning process is more similar with training from scratch, where large momentum stabilizes the decent directions towards the minimum. An illustration of the difference can be found in Figure 2.
Connections to early observations on decreasing momentum Early work (Sutskever et al., 2013) actually pointed out that reducing momentum during the final stage of training allows finer convergence while aggressive momentum would prevent this. They recommended reducing momentum from 0.99 to 0.9 in the last 1000 parameter updates but not disabling it completely. Recent work (Liu et al., 2018; Smith, 2018) showed that a large momentum helps escape from saddle points but can hurt the final convergence within the neighborhood of the optima, implying that momentum should be reduced at the end of training. Liu et al. (2018) find that a larger momentum introduces higher variance of noise and encourages more exploration at the beginning of optimization, and encourages more aggressive exploitation at the end of training. They suggest that at the final stage of the step size annealing, momentum SGD should use a much smaller step size than that of vanilla SGD. When applied to fine-tuning, we can interpret that if the pre-trained model lies in the neighborhood of the optimal solution on the target dataset, the momentum should be small. Our work identifies the empirical evidence of disabling momentum helps final convergence, and fine-tuning on close domains is a good exemplar.
3.3 COUPLED HYPERPARAMETERS AND THE VIEW OF EFFECTIVE LEARNING RATE
Now that we had discovered the effect of momentum by fixing other hyperparameters and only allowed momentum to change. But note that the two difficult scenarios shown in Figure 2 (b) and (c) might also be mitigated by increasing or decreasing learning rate. That is, hyperparameters are coupled and varying one hyperparameter can change the optimal values of the other hyperparameters that lead to the best performance. In addition, optimal values of certain hyperparameters depend on the values of other hyperparameters in systematic ways. For example, learning rate is entangled with batch size, momentum and weight decay. There is a notion of effective learning rate (ELR) (Hertz et al., 1991; Smith et al., 2018; Smith & Le, 2018) for SGD with momentum: η′ = η/(1−m), which was shown to be more closely related with training dynamics and final performance rather than η. The effective learning rate with m = 0.9 is 10× higher than the one with m = 0.0 if other hyperparameters are fixed, which is probably why we see an increase in optimal learning rate when momentum is disabled in Figure 1(b) and Appendix A.
It is the effective learning rate that matters for fine-tuning performance Because hyperparameters are coupled, looking at the performance with only one hyperparameter varied may give a
misleading understanding of the effect of hyperparameters. Therefore, to examine the effect of momentum, we should report the best result obtainable with and without momentum, as long as other hyperparameters explored are sufficiently explored. We re-examine previous experiments that demonstrated the importance of momentum tuning when the ELR η′ = η/(1 − m) is held fixed instead of simply fixing learning rate η. Figure 3 shows that when η′ is constant, the best performance obtained by m = 0.9 and m = 0 are almost equivalent when other hyperparameters are allowed to change. However, different ELR does result in different performance, which indicates its importance for the best performance. It explains why the common practice of changing only learning rate generally works, though changing momentum may results in the same result, they both change the ELR. In fact, as long as the initial learning rate is small enough, we can always search for the optimal momentum as it is an amplifier, making the ELR larger by a factor of 1/(1−m). Therefore, momentum does determine the search ranges of learning rate.
Optimal ELR depends on the similarity between source domain and target domain Now that we have shown ELR is critical for fine-tuning performance, we are interested in the factors that determine the optimal ELR for a given task. Previous work (Smith & Le, 2018) found that there is an optimum ELR which maximizes the test accuracy. However, the observations are only based on scratch training on small datasets (e.g., CIFAR-10); the relationship between ELR and domain similarity, especially for fine-tuning, is still unexplored. To examine this, we search the best ELR on each fine-tuning task and reports in Fig. 4 the best validation error obtained by each ELR while allowing other hyperparameters to change. It shows the optimal ELR depends on both source domain and target domain. As shown in Fig. 4 (a-c), the optimal ELR for Dogs/Caltech/Indoor are much smaller than these for Aircrafts/Flowers/Cars when fine-tuned from ImageNet pre-trained model. Similar observations can be made on DenseNets and MobileNet. Though the optimal ELR value is different, the relative order of domain similarity is consistent and architecture agnostic. We can also see a smaller ELR works better when source domain and target domain are similar, such as Dogs for ImageNet and Birds for iNat2017 (Fig. 4 (a, d-e)). Interestingly, the optimal ELR for training from scratch is much larger and very similar across different target datasets, which indicates the distance from a random initialization is uniformly similar to different target dataset.
10 4 10 3 10 2 10 1 100
10 5 10 4 10 3 10 2 10 1 100
Optimal ELR selection based on domain similarity Now we have made qualitative observations about the relationship between domain similarity and optimal ELR. A quantitative characterization of the relationship could reduce the hyperparameter search ranges for HPO or even eliminate HPO by accurately predicting hyperparameters. We followed the domain similarity calculation in (Cui et al., 2018) and recalculate similarity scores for all source-target domain pairs. Note the original domain similarity calculation in (Cui et al., 2018) use pre-trained JFT (Sun et al., 2017) models as feature extractor, which are not public available. We alternatively use ImageNet pre-trained model or the source model as feature extractor. As shown in Table 4, there is a good correlation between domain similarity score and the scale of optimal ELR. Generally, the more similar the two domains, the smaller the optimal ELR. Though it is not strictly corresponding to the domain similarity score, the score provides reasonable prediction about the scale of optimal ELR, such as [0.001, 0.01], [0.01, 0.1], [0.1, 1.0] and therefore can reduce the search space for optimal ELR. Based on this correlation, a simple strategy can be developed for optimal ELR selection given a frequently used source model: one can calculate domain similarities and perform exhaustive hyperparameter searches for few reference datasets, including similar and dissimilar datasets. Then given a new dataset to fine-tune, one can calculate the domain similarity and compare with the scores of reference datasets, and choose the range of ELRs with the closest domain similarity.
Weight Decay and Learning Rate The relationship between weight decay and effective learning rate is recently well-studied (van Laarhoven, 2017; Zhang et al., 2018; Loshchilov & Hutter, 2018). It was shown that the effect of weight decay on models with BN layers is equivalent to increasing the ELR by shrinking the weights scales, i.e., η′ ∼ η/‖θ‖22. And if the optimal effective learning rate exists, the optimal weight decay value λ is inversely related with the optimal learning rate η. The ‘effective’ weight decay is λ′ = λ/η. We show in Figure 5 that the optimal effective weight decay is also correlated with domain similarity.
10 3 10 2 10 1 100 101 102
3.4 THE CHOICE OF REGULARIZATION
L2 regularization or weight decay is widely used for constraining the model capacity (Hanson & Pratt, 1989; Krogh & Hertz, 1992). Recently Li et al. (2018; 2019) pointed out that standard L2 regularization, which drives the parameters towards the origin, is not adequate in transfer learning. To retain the knowledge learned by the pre-trained model, reference-based regularization was used to regularize the distance between fine-tuned weights and the pre-trained weights, so that the finetuned model is not too different from the initial model. Li et al. (2018) propose L2-SP norm, i.e., λ12 ‖θ ′ − θ0‖22 + λ22 ‖θ ′′‖22, where θ′ refers to the part of network that shared with the source network, and θ′′ refers to the novel part, e.g., the last layer with different number of neurons. While the motivation is intuitive, there are several issues for adopting reference based regularization for fine-tuning:
• Many applications actually adopt fine-tuning on target domains that are quite different from source domain, such as fine-tuning ImageNet models for medical imaging (Mormont et al., 2018; Raghu et al., 2019). The fine-tuned model does not necessarily have to be close with the initial model.
• The scale invariance introduced by Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers enable models with different parameter scales to function the same, i.e., f(θ) = f(αθ). Therefore, when L2 regularization drives ‖θ‖22 towards zeros, it could still have the same functionality as the initial model. On the contrary, a model could still be different even when the L2-SP norm is small.
• L2-SP regularization would constrain θ′′ to be close to θ0, so that ‖θ‖22 is relatively stable in comparison with L2 regularization. Given that ELR is approximately proportional to η/‖θ‖22 and a smaller ELR is beneficial for fine-tuning from similar domains, it may explain why L2-SP provides better performance. If this is true, then by decreasing the initial ELR, L2-norm may function the same.
To examine these conjectures, we revisited the work of (Li et al., 2018) with additional experiments. To show the effectiveness of L2-SP norm, the authors conducted experiments on datasets such as Dogs, Caltech and Indoor, which are all close to the source domain (ImageNet or Places-365). We extend their experiments by fine-tuning on both “similar” and “dissimilar” datasets, including Birds, Cars, Aircrafts and Flowers, with both L2 and L2-SP regularization (details in Appendix D). For fair comparison, we perform the same hyperparameter search for both methods. As expected, Table 5 shows that L2 regularization is very competitive with L2-SP on Birds, Cars, Aircrafts and Flowers, which indicates that reference based regularization may not generalize well for fine-tuning on dissimilar domains.
We also check the change of the regularization terms during training for both methods, as well as their best hyperparameters. As shown in Figure 6, L2 regularization usually decreases the weight norm more aggressively, depending on the value of λ, while L2-SP regularization keeps the norm less changed. We can see that the optimal learning rate of L2 regularization is mostly smaller than that of L2-SP, which may compensate for the decreased weight norm, i.e., the increased ELR. Interestingly, for the Dogs dataset, both regularization terms grow much larger after a few iterations and then become stable, which means constraining the weights to be close to the initialization is not necessarily the reason why L2-SP works, even for close domains. This also seems to contradict the previous finding (Zhang et al., 2018) that weight decay functions by increasing the ELR through decreasing weight norms. However, it might be reasonable here, as a large norm actually decreases the ELR, which could be helpful given the close domain similarity between Dogs and ImageNet.
4 DISCUSSION
The two extreme ways of selecting hyperparameters (performing an exhaustive hyperparameter search, or taking ad-hoc hyperparameters from scratch training) can be either too computationally expensive or yield inferior performance. Different from training from scratch, where the default hyperparameter setting may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but is also influenced by the similarity between the target and source domains. The rarely tuned momentum value can also improve or impede performance when the target and source domains are close, given an insufficiently searched learning rate. These observations connect with previous theoretical work on decreasing momentum at the end of training and on the effective learning rate. We further identify that the optimal effective learning rate correlates with the similarity between the source and target domains. With this understanding, one can significantly reduce the hyperparameter search space. We hope these findings can be one step towards better and more efficient hyperparameter selection for fine-tuning.
ACKNOWLEDGMENTS
The authors would like to thank all anonymous reviewers for their valuable feedback.
A THE EFFECTIVENESS OF MOMENTUM
Searching for Optimal Momentum To check the effectiveness of momentum for fine-tuning, we search for the best momentum values with a fixed learning rate but different weight decay and batch size combinations. Taking the Birds dataset as an example, Figure 7 provides the convergence curves for the results shown in Figure 1(a), i.e., the learning curves of fine-tuning with 6 different batch size and weight decay combinations. Zero momentum outperforms nonzero momentum in 5 out of 6 configurations.
Effective learning rate increases after disabling momentum. Figure 8 compares the performance with and without momentum on the Dogs dataset over a range of learning rates. Note that the learning rate achieving similar performance generally increases 10x after changing m from 0.9 to 0.0, which is consistent with the effective learning rate rule $\eta' = \eta/(1 - m)$. The same observations can be made on other datasets, as shown in Figure 9.
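A quick numerical check of this rule, using the common heavy-ball update v ← mv + g, θ ← θ − ηv (a toy sketch with a constant gradient; the 100-step horizon is arbitrary):

```python
eta, m, g = 0.01, 0.9, 1.0   # learning rate, momentum, constant gradient
v = 0.0
for _ in range(100):
    v = m * v + g            # momentum buffer converges to g / (1 - m)
step = eta * v
print(step, eta / (1 - m) * g)   # both ~0.1: the effective step size is eta / (1 - m)
```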
[Figure 9: Error vs. epochs for fine-tuning ResNet-101_v2 (n = 256, λ = 0.0001) with different learning rates. Panels: (a) Caltech, m = 0.9 (best error 11.64 at η = 0.0005); (b) Caltech, m = 0.0 (best error 11.86 at η = 0.005); (c) Indoor, m = 0.9 (best error 21.29 at η = 0.0005); (d) Indoor, m = 0.0 (best error 21.14 at η = 0.01). The best-performing learning rate shifts roughly 10x higher once momentum is disabled.]
B DOMAIN SIMILARITY
The domain similarity calculation based on the Earth Mover’s Distance (EMD) is introduced in Section 4.1 of (Cui et al., 2018)4. Here we briefly introduce the steps. In (Cui et al., 2018), the authors first train a ResNet-101 on the large-scale JFT dataset (Sun et al., 2017) and use it as a feature extractor. They extract features from the penultimate layer of the model for each image in the training sets of the source and target domains. For ResNet-101, the length of the feature vector is 2048. The features of images belonging to the same category are averaged; $g(s_i)$ denotes the average feature vector of the $i$-th label in the source domain $S$, and similarly $g(t_j)$ denotes the average feature vector of the $j$-th label in the target domain $T$. The distance between the averaged features of two labels is $d_{i,j} = \|g(s_i) - g(t_j)\|$. Each label is associated with a weight $w \in [0, 1]$ corresponding to the percentage of images with this label in the dataset. The source domain $S$ with $m$ labels and the target domain $T$ with $n$ labels can thus be represented as $S = \{(s_i, w_{s_i})\}_{i=1}^{m}$ and $T = \{(t_j, w_{t_j})\}_{j=1}^{n}$. The EMD between the two domains is defined as
$$d(S, T) = \mathrm{EMD}(S, T) = \frac{\sum_{i=1,j=1}^{m,n} f_{i,j}\, d_{i,j}}{\sum_{i=1,j=1}^{m,n} f_{i,j}} \qquad (3)$$
where the optimal flow $f_{i,j}$ minimizes the total amount of work and is obtained by solving the EMD optimization problem. The domain similarity is defined as
$$\mathrm{sim}(S, T) = e^{-\gamma\, d(S, T)} \qquad (4)$$
where γ is 0.01. Note that the domain similarity value does not range from 0 to 1.
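A minimal sketch of this computation, assuming the per-class mean features and label weights have already been extracted, and using the POT library's `ot.emd2` for the optimal-transport cost (with normalized weights the total flow is 1, so the cost equals the ratio in Eq. (3)):

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed available)

def domain_similarity(src_feats, src_w, tgt_feats, tgt_w, gamma=0.01):
    # src_feats: (m, d) per-class mean features g(s_i); src_w: (m,) label weights
    M = np.linalg.norm(src_feats[:, None, :] - tgt_feats[None, :, :], axis=-1)
    a = src_w / src_w.sum()          # normalize so the total flow is 1
    b = tgt_w / tgt_w.sum()
    d = ot.emd2(a, b, M)             # optimal-transport cost, Eq. (3)
    return np.exp(-gamma * d)        # Eq. (4)
```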
Due to the unavailability of the large-scale JFT dataset (300x larger than ImageNet) and its pre-trained ResNet-101 model, we cannot use it to extract features for new datasets, such as Caltech-256 and
4The extracted features and code are available at https://github.com/richardaecn/cvpr18-inaturalist-transfer
MIT67-Indoor. Instead of using this powerful feature representation, we use our pre-trained ImageNet model (ResNet-101) as the feature extractor. Table 4 compares the domain similarities calculated by different pre-trained models, and we can see consistent patterns across architectures: e.g., the 1st and 2nd highest similarity scores belong to Caltech and Dogs regardless of architecture; the 3rd and 4th highest scores belong to Birds and Indoor; the most dissimilar datasets are Cars, Aircrafts and Flowers, though their relative order is not exactly the same across architectures. Besides using a fixed feature extractor, an alternative is to use the source domain model directly as the feature extractor for both the source and target domains, which may capture the transfer learning process more precisely than a uniform feature extractor.
C THE EFFECTIVENESS OF BN MOMENTUM
Kornblith et al. (2019) conducted extensive fine-tuning experiments with different hyperparameters. One observation they made is that the momentum parameter of the BN layer is essential for fine-tuning. They found it useful to decrease the BN momentum parameter from its ImageNet value to max(1 − 10/s, 0.9), where s is the number of steps per epoch. This changes the default BN momentum value (0.9) only when s is larger than 100, which requires a dataset larger than 25.6K images at batch size 256. The largest dataset used in our experiments is Caltech-256, with 15K images, so this strategy does not seem applicable.
We further validate the effect of BN momentum by performing a study similar to the one for the ELR. The goal is to identify whether there is an optimal BN momentum for a given task. For each dataset, we fine-tune the pre-trained model using the previously obtained best hyperparameters and only vary the BN momentum. In addition to the default value 0.9, we also set it to 0.0, 0.95 and 0.99. The rationale is that if BN momentum is a critical hyperparameter, we should expect significant performance differences when the value deviates from the optimal one. As shown in Figure 10, mbn = 0.99 slightly improves the performance for some datasets; however, there is no significant performance difference among values greater than 0.9. One hypothesis is that similar domains share similar BN parameters and statistics, and BN momentum may affect the parameter adaptation. More investigation is still needed to fully understand its effectiveness.
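A sketch of the Kornblith et al. rule in PyTorch (the framework is our assumption; note that PyTorch's BN `momentum` is the update rate of the running statistics, i.e., 1 minus the decay used in the rule above):

```python
import torch

def set_bn_momentum(model, dataset_size, batch_size=256):
    steps_per_epoch = dataset_size // batch_size
    decay = max(1.0 - 10.0 / steps_per_epoch, 0.9)   # rule from Kornblith et al. (2019)
    for mod in model.modules():
        if isinstance(mod, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d)):
            mod.momentum = 1.0 - decay               # PyTorch convention: momentum = 1 - decay
```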
D EXPERIMENTAL SETTINGS FOR COMPARISON OF L2 AND L2-SP
The experiments in Section 3.4 are based on the code5 provided by Li et al. (2018). The base network is an ImageNet pre-trained ResNet-101-V1. The model is fine-tuned with batch size 64 for 9000 iterations, and the learning rate is decayed once at iteration 6000. Following the original setting, we use momentum 0.9. We performed a grid search on the learning rate and weight decay, with the ranges η ∈ {0.02, 0.01, 0.005, 0.001, 0.0001} and λ1 ∈ {0.1, 0.01, 0.001, 0.0001}, and report the best average class error (1 − average accuracy) for both methods. For the L2-SP norm, we follow the authors’ setting and use a constant λ2 = 0.01. Different from the original setting for L2 regularization, we set λ2 = λ1 to simulate the normal L2 norm.
5 https://github.com/holyseven/TransferLearningClassification
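The search protocol above amounts to a small grid; the sketch below assumes a hypothetical `fine_tune(...)` helper that runs one training job with the stated schedule and returns the average class error (for the L2-SP runs, λ2 would instead be fixed at 0.01):

```python
from itertools import product

lrs = [0.02, 0.01, 0.005, 0.001, 0.0001]   # eta
lams = [0.1, 0.01, 0.001, 0.0001]          # lambda_1
results = {(lr, lam): fine_tune(lr=lr, lambda1=lam, lambda2=lam,  # lambda2 = lambda1 for plain L2
                                batch_size=64, iters=9000, decay_at=6000, momentum=0.9)
           for lr, lam in product(lrs, lams)}
best_cfg, best_err = min(results.items(), key=lambda kv: kv[1])
```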
E DATA AUGMENTATION
Data augmentation is an important way of increasing data quantity and diversity to make models more robust, and it is even critical for transfer learning with few instances. The effect of data augmentation can be viewed as regularization, and the choice of data augmentation can also be viewed as a hyperparameter. Most widely used data augmentation methods have verified their effectiveness on training ImageNet models, such as random mirror flipping, random rescaled cropping6, color jittering, etc. (Szegedy et al., 2015; Xie et al., 2018).
Do these methods transfer to fine-tuning on other datasets? Here we compare three data augmentation settings under different momentum settings: 1) random resized cropping: our default data augmentation; 2) random cropping: the same as the default except that we use random cropping with a fixed size; 3) random flip: simply random horizontal flipping. The training and validation errors of fine-tuning with different data augmentation strategies and hyperparameters are shown in Figure 11 and Figure 12.
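For reference, the three settings roughly correspond to the following torchvision pipelines (a sketch; the `Resize(256)` step for the two fixed-size variants is our assumption):

```python
import torchvision.transforms as T

rand_resized_crop = T.Compose([T.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),
                               T.RandomHorizontalFlip(), T.ToTensor()])
rand_crop = T.Compose([T.Resize(256), T.RandomCrop(224),
                       T.RandomHorizontalFlip(), T.ToTensor()])
rand_flip = T.Compose([T.Resize(256), T.CenterCrop(224),
                       T.RandomHorizontalFlip(), T.ToTensor()])
```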
The effect of data augmentation is dataset dependent and is also influenced by other hyperparameters. The first row in Figure 11 shows that the advanced data augmentation with default hyperparameters (m = 0.9 and η = 0.01) leads to overfitting on Dogs while generalizing better on Aircrafts and Flowers. Similar observations can be made in Figure 12. However, when momentum is disabled, the overfitting disappears for Dogs and Caltech. This is explainable, since random resized cropping adds more variance to the gradient direction, and disabling momentum leads to a smaller ELR, which is helpful when fine-tuning from a similar domain. On the other hand, the performance of random cropping decreases when momentum is disabled. As random cropping produces training samples with less variation than random resized cropping, disabling momentum or decreasing the ELR might lead to underfitting or getting stuck in poor local minima. This can be mitigated by increasing the learning rate for random cropping, which adds variation to the gradients.
6Randomly crop a rectangular region with aspect ratio randomly sampled in [3/4, 4/3] and area randomly sampled in [8%, 100%] (Szegedy et al., 2015)
[Figure 11: Error vs. epochs for fine-tuning ResNet-101_v2 (η = 0.01, n = 256, λ = 0.0001) with different data augmentation strategies. Panels: (a) Caltech, m = 0.9 (rand resized crop 14.85, rand crop 12.42, rand flip 12.34); (b) Caltech, m = 0.0 (rand resized crop 12.11, rand crop 12.89); (c) Indoor, m = 0.9 (rand resized crop 23.76, rand crop 23.39, rand flip 23.31); (d) Indoor, m = 0.0 (rand resized crop 21.14, rand crop 25.19).]
As shown in Table 6, when the learning rate is increased from 0.01 to 0.05, disabling momentum shows better performance than nonzero momentum on datasets that are close to the source domain, similar to the previous findings with random resized cropping.
F SOURCE DOMAINS
Pre-trained models For most of our experiments, we use the pre-trained ResNet-101_v2 model from the MXNet GluonCV model zoo7. To obtain the pre-trained models for iNat-2017 and Places-365, we fine-tune from the ImageNet pre-trained model with the default fine-tuning hyperparameters for 60 epochs, where the learning rate is decayed at epoch 45 by a factor of 10. Table 7 reports the top-1 errors of each pre-trained model on its validation set.
Training from Scratch with HPO The default hyperparameters for training from scratch are η = 0.1, λ = 0.0001, m = 0.9 and n = 256. We train for 600 epochs and decay the learning rate at epochs 400 and 550 by a factor of 10. To perform Hyperparameter Optimization (HPO), we search hyperparameters in the following space: η ∈ [0.1, 0.2, 0.5] and λ ∈ [0.0001, 0.0005]. Figure 13 shows the training/validation errors of training from scratch on each dataset with different learning rates and weight decays. We observe that weight decay 0.0005 consistently performs better than 0.0001.
Insufficient hyperparameter search may lead to misleading conclusions To show the importance of hyperparameter tuning, Table 8 compares the performance with and without hyperparameter tuning for both fine-tuning and training from scratch. With the default hyperparameters, some inappropriate conclusions might be drawn, e.g., “there is a significant gap between fine-tuning and training from scratch”, “fine-tuning always surpasses training from scratch” or “fine-tuning from iNat cannot beat the performance of ImageNet”. However, with HPO, those statements may not be valid. For example, training from scratch surpasses the default fine-tuning result on Cars and Aircrafts, and the gap between fine-tuning and training from scratch is much smaller. Previous studies (Kornblith et al., 2019; Cui et al., 2018) also identified that datasets like Cars and Aircrafts do not benefit much from fine-tuning. | 1. What are the main contributions and findings of the paper regarding hyperparameter tuning?
2. How does the paper challenge common beliefs about fine-tuning and regularization?
3. What is the relationship between momentum and domain similarity, and how does it impact the choice of hyperparameters?
4. How do the authors explain the phenomenon where L_2-SP regularization does not always perform better than L_2?
5. What is the reviewer's concern regarding the method used to compare the similarity between datasets? | Review | Review
This paper provides extensive experimental results to investigate the influence of hyper-parameters on fine-tuning and challenges several commonly-held beliefs. The hyper-parameters of training from scratch do not always perform well when applied to fine-tuning. Furthermore, the current L_2-SP regularization is not necessarily helpful when the domain discrepancy is large.
The authors discover that the optimal momentum value is closely related to domain similarity. For similar target datasets, 0 momentum is a better choice than 0.9, since it potentially allows better convergence. Similar to training from scratch, the actual effects at play are the effective learning rate and the ‘effective’ weight decay. This further involves the coupling of hyper-parameters.
Contrary to the commonly-held belief, L_2-SP regularization does not always perform better than L_2. When the domain discrepancy is large, its regularization effect is worsened.
This paper is well-written and makes several interesting discoveries. My question for the authors is as follows:
In the momentum section, the authors postulate that for more similar target datasets, smaller momentum performs better. Here, the similarity is quantified by the EM distance defined in the feature space. However, for the five datasets provided, the similarity values are really close to each other, making this claim less convincing. The conclusion is reasonable, but the authors may need a more reliable method to compare the similarity between datasets.
ICLR | Title
Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning
Abstract
In self-supervised learning frameworks, deep networks are optimized to align different views of an instance that contain similar visual semantic information. The views are generated by applying a series of data augmentations to the anchor samples. Although the data augmentation operations are often designed to be aggressive and extensive to lower the mutual information between views, the family of Information-Erasing data augmentations that mask out regions of images is barely considered. In this work, we propose the Piecing and Chipping enhanced Erasing Augmentation (PCEA) approach to make self-supervised learning algorithms benefit from the effectiveness of Information-Erasing data augmentation. Specifically, we design a pipeline to generate mutually weakly related transformed views using random erasing and build corresponding loss terms to take advantage of these views. Extensive experiments demonstrate the effectiveness of our method. Particularly, applying our PCEA to MoCo v2 improves the baseline by 12.84% and 3.3% in terms of linear classification on ImageNet-100 and ImageNet-1K, respectively.
1 INTRODUCTION
Deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012) have achieved great success in computer vision tasks, and in recent years self-supervised learning (Oord et al., 2018; Chen et al., 2020b;c; He et al., 2020; Li et al., 2021; Zbontar et al., 2021; Grill et al., 2020; Chuang et al., 2020; Hu et al., 2020; Kim et al., 2020; Zhu et al., 2020; Caron et al., 2020; Xiao et al., 2020; Kalantidis et al., 2020) has also achieved great success and gained attention because of its ability to reduce the labor cost of large-scale dataset annotation. Self-supervised learning aims at learning some form of image representation by figuring out a pattern that can explain the image reasonably. The learned pattern can be used in downstream tasks such as image classification, object detection, segmentation, etc. Self-supervised learning is achieved mainly in two different styles: contrastive (Chen et al., 2020b;c; He et al., 2020; Chuang et al., 2020) and non-contrastive (Li et al., 2021; Zbontar et al., 2021; Grill et al., 2020) (though a detailed taxonomy of self-supervised learning is beyond the scope of this study). The key component of both styles is the generation of views of the anchor sample.
The term “view” in self-supervised learning is roughly grounded as “augmented or transformed samples that maintain semantically similar information to the anchor sample”. In computer vision tasks, the generation of views is accomplished by a series of domain transformation operations, e.g., ColorJitter, RandomGrayscale, GaussianBlur. Former literature has examined the influence of adopting different types of transformations. In these works, the composition of transformation operations is considered the crucial part of learning good representations (Chen et al., 2020b), and a proper approach should reduce the mutual information between views while keeping task-relevant information intact (Tian et al., 2020).
One family of data augmentation that is commonly employed in computer vision tasks is “information-erasing”. By this, we refer to methods that mask small regions of an image such that the information concerning the objects in the image is erased (DeVries & Taylor, 2017; Yun et al., 2019; French et al., 2020; Singh & Lee, 2017; Chen et al., 2020a). However, this family of data augmentation is barely seen in self-supervised learning algorithms; in (Chen et al., 2020b), the researchers even denote Cutout (DeVries & Taylor, 2017) as an unfavored augmentation method for generating views. We conjecture that the primary reason for the inferior performance of the information-erasing family in self-supervised learning algorithms is the inconsistency in preserving task-relevant information. At the same time, information-erasing methods do not contribute to reducing the mutual information between views and anchor images in the non-masked regions. As a consequence, the generated views could be valueless for feature extractors to learn semantically meaningful representations.
In this work, we tackle the aforementioned drawbacks regarding inconsistency and mutual information reduction. We build an approach on the simple random erasing method to provide stable, high-quality views that improve the performance of self-supervised learning algorithms. We refer to our approach as Piecing and Chipping enhanced Erasing Augmentation (PCEA), which is built upon four motivations:
1. Multiple instances of erasing-augmented images are generated and pieced together, and we chip the larger image irregularly so that the views become weakly related by acquiring peripheral patches from other views;

2. We resize the irregularly chipped views without preserving the aspect ratio to reduce the mutual information in the non-masked regions;

3. We feed more than one view (two in this work) to the “positive pair” loss head of the self-supervised algorithms to lessen the inconsistency brought by the random selection of masked regions;

4. Considering the above approach for view generation, we also regularize the predicted similarity between these views. Thus, we can largely prevent non-task-relevant information from being memorized.
In simple terms, we spawn weakly related child-views that are similar to their parent-view while being considerably different from each other. The overall approach is shown in Figure 1 and Figure 2. The dark and light blue spots denote the negative samples from different images. The red spot (k) and green spots (q1 and q2) indicate the positive pairs. The proposed method aims to enlarge the margin between the blue and non-blue spots and the distance between the green spots (q1 and q2). For the positives, the margin between the red spot (k) and each green spot (q1 or q2) is narrowed.
In our experimental analysis, we first compare the effectiveness of the proposed approach with other information-erasing data augmentations. We keep the comparison fair by offering multiple child-views for all the augmentation methods, as demonstrated in Figure 1. We show that the piecing-and-chipping-based random erasing augmentation outperforms other well-designed augmentation methods by a large margin. We also conduct experiments comparing with other state-of-the-art self-supervised learning algorithms. Specifically, we employ MoCo v2 (Chen et al., 2020d) as our backbone and modify its view generation code with the proposed PCEA. We then achieve a
competitive performance on the linear-probe classification task using the ImageNet-1K datasets. Overall, the main contributions of this work can be summarized as follows:
• We propose Piecing and Chipping enhanced Erasing Augmentation (PCEA), a novel data augmentation approach for the view generation in self-supervised learning algorithms.
• The proposed PCEA data augmentation approach also offers a novel method of utilizing multiple child-views. The method not only reduces the inconsistency in the view generation process but also regularizes the utilization of non-task-relevant information during self-supervised learning.

• We conduct extensive experiments to demonstrate the effectiveness of our method. To the best of our knowledge, this is the first successful attempt at involving the Information-Erasing family of data augmentation in self-supervised learning algorithms.
2 RELATED WORK
2.1 SELF-SUPERVISED LEARNING
A wide range of self-supervised learning algorithms has been proposed to improve the quality of learned representations. Recent self-supervised learning algorithms can be divided into two categories: non-contrastive ones that employ positive pairs of samples, and contrastive ones that employ negative pairs of samples. Here the terms positive/negative do not strictly refer to pairs of samples with similar/different semantic information, but to pairs of views generated from the same or different anchor samples. In the family of non-contrastive self-supervised learning, BYOL (Grill et al., 2020) achieves outstanding performance; it relies on two neural networks to represent the visual semantic information, and the online and target networks interact and learn from each other. The SimSiam architecture (Chen & He, 2020) aims at enlarging the similarity between two augmented views of one image with a shared encoder network. On the other hand, typical contrastive self-supervised learning applies multi-layer perceptrons and stop-gradient tricks to prevent collapsing (Chen et al., 2020b). To reduce the memory cost of a large amount of negative samples, MoCo (He et al., 2020) proposes a momentum memory bank to record negative samples from previous steps. SwAV (Caron et al., 2020) is an online algorithm that improves the contrastive method without pairwise comparison: an online clustering loss is constructed, and a multi-crop strategy is introduced to increase the number of views without extra computational overhead. In this study, we employ both families of self-supervised learning algorithms to verify the effectiveness and efficiency of our proposed method.
2.2 DATA AUGMENTATION IN SELF-SUPERVISED LEARNING
Data augmentation in vanilla computer vision tasks helps to improve performance by increasing the amount of training data. Specifically, in practical implementations, this technique helps the model find indistinguishable features in the image and reduces overfitting, like a regularizer. However, in self-supervised learning scenarios, data augmentation plays a much different role. In the SimCLR paper (Chen et al., 2020b), the authors carefully examine the effects of different data augmentations w.r.t. downstream classification tasks. In their conclusion, Gaussian blur on the input images and stronger color distortion act as critical factors in obtaining effective predictions. SimCLR has experimentally demonstrated that the ImageNet top-1 linear classification accuracy is increased from 59.6% to 63.2% by a stronger color distortion strength. This conclusion is further confirmed in Chen et al. (2020c), which shows that the accuracy of MoCo v1 with extra blur augmentation is increased by 2.8% to 63.4%. Furthermore, Tian et al. (2020) argue that proper data augmentation should reduce the mutual information between views while keeping task-relevant information, and develop the more aggressive info-min data augmentation approach. However, we consider that the commonly used data augmentations are still limited in fully exploiting the semantic information of visual representations in self-supervised learning. In this work, we focus on the family of data augmentations that mask out semantic information straightforwardly.
2.3 INFORMATION ERASING DATA AUGMENTATION
In Noroozi & Favaro (2016), a puzzle-based data augmentation method is developed in an unsupervised visual representation learning manner, which builds a CNN to solve Jigsaw puzzles as a pretext task for enhancing classification and detection performance. In DeVries & Taylor (2017), a method named “Cutout” is designed for the object classification task, which randomly masks square regions of training images and forces the model to find less prominent features. These two methods can be regarded as early explorations of advanced data augmentation for object classification and detection-related tasks. With their convenience and efficiency, they achieved state-of-the-art performance on computer vision tasks at the time and profoundly influenced later methods. However, this kind of single splicing and deletion of images or image parts also limits the performance of the models.
In the object localization area, a weakly supervised framework named “Hide-and-Seek” is proposed in Singh & Lee (2017), which randomly hides patches of the images and enhances the model: not only can the most discriminative part of the image be identified, but parts with weaker discriminativeness can also be identified. Through the overall organization of the parts in the image, the discriminative performance of the model is improved. Another method, MixUp, is designed in Zhang et al. (2017), which provides an image data augmentation idea based on convex combinations of the training data. With state-of-the-art performance on several benchmarks such as ImageNet-2012, CIFAR-10, and CIFAR-100, MixUp suggests a potential direction for unsupervised, semi-supervised, and reinforcement learning. Different from traditional regional dropout or patch removal methods, the CutMix data augmentation method proposed in Yun et al. (2019) cuts patches and pastes them among training images, with the ground-truth labels mixed accordingly, to enhance the reliability and stability of the model. These three methods provide new ideas for data augmentation, and the methods based on them also achieved state-of-the-art performance at the time. However, they still do not completely get rid of relatively inflexible processing of images or patches, such as fixed proportions of the original image and fixed patch shapes, which also restricts the performance of the models.
Based on the studies above, a regional dropout strategy named GridMask is designed in Chen et al. (2020a), which provides a controllable method to delete patches of a training image. Compared with previous methods, this structured information-dropping method is more effective and avoids random information dropping. At the same time, to overcome the shortcomings of the squared patches used in previous studies, a Gaussian-filter-based data augmentation method, “Milking CowMask”, is proposed in French et al. (2020). This method provides more flexibly shaped masks according to tunable parameters of the Gaussian filter, with fewer correlations, and reaches a new state-of-the-art performance on related tasks. However, the methods discussed above focus on increasing the discriminability of samples in the entire dataset, which results in limited performance in self-supervised learning cases.
3 METHODOLOGY
3.1 PCEA: PIECING AND CHIPPING ENHANCED ERASING AUGMENTATION
In this section, we first introduce how the views are generated in the proposed Piecing and Chipping enhanced Erasing Augmentation (PCEA) method. The overall approach is depicted in Figure 2.
We refer to two pipelines of image transformation operations as T1 and T2. T2 is the ordinarily adopted data augmentation used in state-of-the-art self-supervised learning algorithms (in this paper, we employ the data augmentation strategy of MoCo v2 (Chen et al., 2020d)). T1 is based on T2, with an additional masking operation (in this paper, we employ random erasing). The PCEA method is described as follows; a minimal code sketch of Steps 2-4 is given after the list:
• Step 1: For each image $x \in \mathbb{R}^{w \times h \times c}$,1 we generate views x_v1,2,3,4 and x_k using T1 and T2, respectively. The x_v1,2,3,4 are denoted as “child-views”, while x_k is denoted as the “parent-view”.

• Step 2: We piece the 4 different child-views x_v1,2,3,4 (224×224) together to obtain a larger image (448×448). This newly generated image is considered an alternative image from which to obtain more positive samples with more substantial semantic information.

• Step 3: We locate a candidate region (the green rectangle in the figure) at the centroid of the newly generated image.2 We then uniformly at random select a segmentation point in the candidate region and chip the image vertically and horizontally. Thus we obtain a new set of child-views x_q1,2,3,4.

• Step 4: The new child-views are resized to the original size (224×224), without preserving the aspect ratio. We finally select 2 child-views as the new positive pairs of their parent-view x_k.

1For the rest of the paper, we let w, h = 224 and omit the channel notation c for the sake of readability.
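As referenced above, here is a minimal sketch of Steps 2-4 (the PyTorch rendering and the bilinear resizing are our assumptions; `views` holds the four erased child-views x_v1,...,4):

```python
import random
import torch
import torch.nn.functional as F

def piece_and_chip(views, region=224):
    # views: list of 4 child-view tensors, each (C, 224, 224)
    s = views[0].shape[-1]
    big = torch.cat([torch.cat(views[:2], dim=2),          # piece into a 2x2 mosaic
                     torch.cat(views[2:], dim=2)], dim=1)  # -> (C, 448, 448)
    lo = (2 * s - region) // 2            # candidate region centered in the mosaic
    cy = random.randint(lo, lo + region)  # segmentation point (row, col)
    cx = random.randint(lo, lo + region)
    chips = [big[:, :cy, :cx], big[:, :cy, cx:],           # chip vertically and horizontally
             big[:, cy:, :cx], big[:, cy:, cx:]]
    chips = [F.interpolate(c.unsqueeze(0), size=(s, s),    # resize back; aspect ratio not preserved
                           mode="bilinear", align_corners=False).squeeze(0)
             for c in chips]
    return random.sample(chips, 2)                         # two new positives x_q1, x_q2
```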
3.2 SIMILARITY REGULARIZATION LOSS
Although x_q1 and x_q2 can still be roughly judged as identical by human observers, the randomness in choosing the erased regions and the change of aspect ratio create a considerable margin between the semantic information of the visual representations in x_q1 and x_q2. Meanwhile, the ordinary InfoNCE loss aligns both child-views to their parent-view. To prevent the deep model from implicitly aligning the child-views, we add an additional Similarity Regularization (SimReg) loss term to attain explicit discrimination between them. This loss term is implemented as a simple cosine-style similarity between the embedded representations of the child-views. Accordingly, the loss between q1 and q2 is defined as (1).
$$\mathcal{L}_{\mathrm{SimReg}} = \frac{q_1 \cdot q_2}{\max(\|q_1\|_2, \|q_2\|_2)} \qquad (1)$$
In our experimental analysis, we empirically find that the loss term is insensitive to the loss weight (the so-called λ in much of the literature). Therefore, we set the loss weight hyper-parameter to 1.0 in all experimental configurations.
2Here we set the size of the candidate region to be the same as the ‘child-views’ (224×224); more details are discussed in the ablation study.
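A batched sketch of Eq. (1) (our PyTorch rendering, not the authors' code; the term is added to the InfoNCE loss with weight 1.0):

```python
import torch

def simreg_loss(q1, q2):
    # q1, q2: (N, C) embeddings of the two child-views
    dot = (q1 * q2).sum(dim=1)                              # q1 . q2
    denom = torch.maximum(q1.norm(dim=1), q2.norm(dim=1))   # max(||q1||_2, ||q2||_2)
    return (dot / denom).mean()                             # Eq. (1): minimized to push q1, q2 apart
```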
4 EXPERIMENTS
4.1 DATASETS & EXPERIMENTAL CONFIGURATIONS
Datasets. In this work, we conduct experiments on the ImageNet ILSVRC-2012 dataset (Deng et al., 2009) with 1.28 million images in 1000 categories (ImageNet-1K) and a subset of images in 100 categories (ImageNet-100), which have been widely utilized as benchmark datasets (Tian et al., 2019; He et al., 2020; Grill et al., 2020; Hu et al., 2020). We also construct a more difficult subset of the original ImageNet-1K dataset, named Small-ImageNet-1000 (S-ImageNet-1K). S-ImageNet-1K selects only 10 percent of the images from each category, which aims at reducing the richness of visual representations while maintaining the same representation distribution as the original ImageNet-1K. The evaluation is carried out by training a linear probe for the classification task while keeping the weights of the feature extractor frozen. For ImageNet-1K and ImageNet-100, we employ the commonly adopted classification accuracy as the evaluation metric. For S-ImageNet-1K, we employ the average correct classification rate among all 1000 categories proposed in Le & Yang (2015) as our evaluation metric.
In addition, we also employ the widely acknowledged MsCOCO dataset (Lin et al., 2014) to verify the proposed PCEA on the object detection task. We fine-tune the ImageNet-1K pre-trained backbone models using the train2017 split and perform evaluation on the val2017 split.
Configurations. We employ the vanilla ResNet-50 (He et al., 2016) equipped with global average pooling on its head as our backbone architecture. We employ a projection head with only one linear layer for the encoders fθ and fε. The feature dimensions of the output of the ResNet-50 pooling layer and of the embedding vector are 2048 and 128, respectively. For other hyper-parameters, we keep the same configuration as in MoCo v2 (Chen et al., 2020d) and SimSiam (Chen & He, 2021). In the MoCo v2 algorithm, the augmented views x_q1 and x_q2 are fed into the encoder network fθ with back-propagation. Meanwhile, x_k is represented as k = fε(x_k) without back-propagation, where fε(·) denotes the momentum encoder. In the SimSiam algorithm, we simply average the similarities between the multiple child-views and the parent-view. For the detection task, we adopt the commonly used Faster R-CNN with ResNet-50 as the baseline architecture.
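A sketch of how the two child-views could enter the MoCo-style objective (an assumed rendering, not the authors' code; k and the queue columns are taken to be L2-normalized):

```python
import torch
import torch.nn.functional as F

def multi_positive_infonce(q1, q2, k, queue, t=0.2):
    # q1, q2: (N, C) queries; k: (N, C) momentum key; queue: (C, K) negatives
    def infonce(q):
        q = F.normalize(q, dim=1)
        l_pos = (q * k).sum(dim=1, keepdim=True)   # (N, 1) positive logits
        l_neg = q @ queue                          # (N, K) negative logits
        logits = torch.cat([l_pos, l_neg], dim=1) / t
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)
    return 0.5 * (infonce(q1) + infonce(q2))       # average over the two child-views
```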
Training. During training, a mini-batch size of 256 is used on 8 GPUs (Tesla V100 16G), and the initial learning rate is set to 0.03. SGD (Loshchilov & Hutter, 2016) is used as the optimizer; the weight decay and the momentum are set to 0.0001 and 0.9, respectively. We train for 200/100 epochs with a cosine learning rate decay for MoCo v2 and SimSiam, respectively. The numbers of negative samples in the momentum queue and the sliding queue are 65536 and 32768, respectively. The temperature is set to 0.2.
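In PyTorch terms (the framework is our assumption), the stated schedule corresponds to:

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.03,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)  # 200 epochs (MoCo v2)
```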
4.2 EXPERIMENTAL RESULTS
ImageNet-100. Following previous work (Chen et al., 2020d; Chen & He, 2021), we evaluate nine data augmentation methods on MoCo v2 (Chen et al., 2020d) and SimSiam (Chen & He, 2021), where linear classifiers are trained on frozen features from these methods. The comparison results are reported in Table 1. As can be seen, applying PCEA to MoCo v2, with negative samples involved, achieves the best performance against baselines using other data augmentation methods. Particularly, our PCEA outperforms the vanilla baseline by 12.84% and 3.85% in terms of top-1 and top-5 accuracy, respectively. This demonstrates the effectiveness of our PCEA in learning discriminative representations by treating the positive and negative samples separately. We can also observe that SimSiam (Chen & He, 2021) with our PCEA achieves superior performance on the ImageNet-100 dataset against previous data augmentations, which further validates the generalizability of our PCEA to existing contrastive self-supervised methods.
ImageNet-1K. Furthermore, we compare our PCEA with existing state-of-the-art self-supervised methods under the linear classification setting in Table 2. From the results, we can observe that our PCEA outperforms MoCo v2, the vanilla baseline, by a large margin, i.e., 3.3% in terms of top-1 accuracy. Meanwhile, we also achieve competitive results with previous methods in terms of top-1 and top-5 accuracy, which further demonstrates the advantage of our PCEA over baselines under the same linear classification setting.
S-ImageNet-1K. Table 3 reports the comparison results of linear classification on our S-ImageNet-1K dataset, a smaller dataset with the same distribution as the original ImageNet-1K but lacking richness in visual representations. The proposed PCEA with MoCo v2 outperforms its baseline algorithm by a large margin (15.0%) in terms of top-1 accuracy. This superior performance validates the effectiveness and efficiency of PCEA in difficult configurations.
MsCOCO. Table 4 reports the detection performance (mAP) on the MsCOCO dataset. The proposed PCEA with MoCo v2 achieves the best result compared to state-of-the-art self-supervised pre-trained backbones. Specifically, it outperforms its baseline MoCo v2 by 1.1% and the supervised pre-trained model by 4.3%.
5 ABLATION STUDY
In this section, we conduct extensive ablation studies to explore how each step of our PCEA and the size of the candidate region affect the final performance of our approach. Unless specified otherwise, we perform the experiments on the ImageNet-100 dataset.
5.1 ABLATION ON EACH STEP OF PCEA
In order to explore the effect of each step of our PCEA on the final performance, we ablate each step and report the experimental results in Table 5, which describes the effectiveness of each step in the Mosaic process. The top-1 accuracy on ImageNet-100 with the same data augmentation processing as MoCo v2 (Chen et al., 2020d) is 81.65% with two inputs (k and q). After that, the result benefits from a promotion with the two inputs x_q1 and x_q2 in Step 1, which increases the performance by 5.5%. As for the combination of Steps 2 and 3, we use padding and random crop to modify the output images instead of resizing the split images to 224×224, which achieves a higher accuracy of 90.76%. Adding Step 4 to the previous three steps boosts the top-1 and top-5 accuracy to 94.49% and 99.62%, which indeed validates the rationality of the interpolation in our PCEA for capturing fine-grained instance features.
5.2 ABLATION ON THE SIZE OF CANDIDATE REGION
To analyze how the size of the candidate region affects the final performance of our PCEA, we vary the size among 28, 56, 112, 224, 336 and 448. The comparison results are reported in Table 6. As can be seen, our PCEA with a size of 224×224 achieves the best performance compared to the other size settings. As the size of the candidate region increases, the performance of our PCEA degrades considerably, which could be caused by more background information being introduced in the selected region. Meanwhile, when the size of the candidate region is decreased to 112×112, our PCEA performs worse than the best result in terms of top-1 and top-5 accuracy. This further shows the importance of choosing the right size of the candidate region to learn more discriminative representations during pre-training.
5.3 ABLATION ON NUMBER OF VIEWS IN LOSS TERMS
We vary the number of child-views participating in the self-supervised learning loss (InfoNCE for MoCo, cosine similarity for SimSiam). The loss terms are duplicated and averaged according to the number of child-views. We also conduct experiments on the effects of the SimReg loss term. Table 7 reports these results on both S-ImageNet-1K and ImageNet-1K. It can be seen that two child-views achieve the best performance among the different configurations. On the other hand, the SimReg loss is overwhelmingly effective on the difficult S-ImageNet-1K dataset.
6 CONCLUSION
In this work, we propose Piecing and Chipping enhanced Erasing Augmentation (PCEA), a novel approach to employing the information-erasing family of data augmentation methods in self-supervised learning scenarios. We compare against eight existing information-erasing data augmentations on commonly used benchmark datasets, and we equip PCEA on two popular self-supervised learning baseline algorithms. Both sets of results prove the effectiveness and efficiency of the proposed PCEA approach. We believe the involvement of the information-erasing family of data augmentation will have a broader impact on the further development of self-supervised learning algorithms.
2. What are the strengths of the proposed approach, particularly in its performance improvement and compatibility with existing methods?
3. What are the weaknesses of the paper regarding the missing ablation study and the design motivation of the augmentation pipeline?
4. Do you have any questions regarding the compatibility of the proposed method with other info-erasing augmentations and clustering-based self-supervised methods?
5. What are the similarity scores of child views trained with and without SimReg loss? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a method, PCEA, that makes self-supervised methods benefit from info-erasing data augmentations, which were shown to hurt performance in previous works. The authors also propose a SimReg loss to prevent multiple child views from collapsing into one single representation.
When combined with existing self-supervised learning methods MoCo v2 and SimSiam, the proposed method outperforms other info-erasing augmentations. When combined with MoCo v2, it outperforms other self-supervised methods.
Review
Strength
substantial performance improvement compared to other info-erasing augmentations (Tab1)
competitive performance on various self-supervised benchmarks when combined with MoCo-v2
Weakness
Since the augmentation pipeline contains many extra steps, the reviewer thinks one important piece of ablation is missing: the same augmentation pipeline without erasing. The performance improvement might come from the piecing and chipping or from the multiple child-views, not necessarily from the info-erasing augmentation.
Is the proposed method compatible with other info-erasing augmentations? For example, those listed in Sec 2.3.
Is the proposed method compatible with clustering-based self-supervised methods such as SwAV?
The design motivation of each step in the augmentation pipeline is not well discussed.
what are the similarity scores of child views trained with and without SimReg loss? |
ICLR | Title
Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning
Abstract
In self-supervised learning frameworks, deep networks are optimized to align different views of an instance that contains the similar visual semantic information. The views are generated by conducting series of data augmentation to the anchor samples. Although the data augmentation operations are often designed to be aggressive and extensive to lower the mutual information between views, the family of Information-Erasing data augmentation that masks out region of images is barely considered. In this work, we propose the Piecing and Chipping enhanced Erasing Augmentation (PCEA) approach to making the self-supervised learning algorithms benefit from the effectiveness of Information-Erasing data augmentation. Specifically, we design a pipeline to generate mutually weakly related transformed views using random erasing and build corresponding loss terms to take advantage of these views. Extensive experiments demonstrate the effectiveness of our method. Particularly, applying our PCEA to MoCo v2 improves the baseline by 12.84%, 3.3% in terms of linear classification on ImageNet-100 and ImageNet-1K.
1 INTRODUCTION
The deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012) have a great success in computer vision tasks, and in recent years, self-supervised learning (Oord et al., 2018; Chen et al., 2020b;c; He et al., 2020; Li et al., 2021; Zbontar et al., 2021; Grill et al., 2020; Chuang et al., 2020; Hu et al., 2020; Kim et al., 2020; Zhu et al., 2020; Caron et al., 2020; Xiao et al., 2020; Kalantidis et al., 2020) also achieve a great success and gained attentions because of its ability of reducing the labor cost on large-scale dataset annotation. Self-supervised learning aims at learning some forms of image representations by figuring out a pattern that can explain the image reasonably. The learned pattern can be used in downstream tasks, such as image classification, object detection, segmentation and etc. The self-supervised learning can be achieved majorly in two different styles: contrastive (Chen et al., 2020b;c; He et al., 2020; Chuang et al., 2020) and non-contrastive (Li et al., 2021; Zbontar et al., 2021; Grill et al., 2020) (though the detailed taxonomy of self-supervised learning is the topic of this study). The key component of both styles is the generation of views of the anchor sample.
The term “view” in self-supervised learning is roughly grounded as “augmented or transformed samples that maintain semantically similar information to the anchor sample”. In the computer vision tasks, the generation of views is accomplished a series of domain transformation operations, e.g. ColorJitter, RandomGrayscale, GaussianBlur. Former literature has examined the influence of the adaptation of different types of transformation. In these works, the composition of transformation operations is considered as the crucial part for learning good representations (Chen et al., 2020b). And the proper approach to reduce the mutual information between views while keeping task-relevant information intact (Tian et al., 2020).
One family of data augmentation that is commonly employed in computer vision tasks is “informationerasing”. By which, we refer to the methods that mask small regions of an image, such that the information concerning the objects in the image is erased (DeVries & Taylor, 2017; Yun et al., 2019; French et al., 2020; Singh & Lee, 2017; Chen et al., 2020a). However, this family of data augmentation is barely seen in self-supervised learning algorithms. While in (Chen et al., 2020b), the researchers also denote the Cutout (DeVries & Taylor, 2017) as an unflavored augmentation method to generate
views. We conjecture the primary reason for the inferior performance of the information-erasing family in the self-supervised learning algorithms is the inconsistency in preserving the task-relevant information. At the same time, information-erasing methods do not contribute to the reduction of the mutual information between views and anchor images in the non-masked regions. As a consequence, the generated views could be valueless for feature extractors to learning semantically meaningful representations.
In this work, we tackle the aforementioned drawbacks of inconsistency and mutual information reduction. We build an approach with the simple random erasing method to provide stable views with high qualities to improve the performance of self-supervised learning algorithms. We refer to our approach as Piecing and Chipping enhanced Erasing Augmentation (PCEA), which is built upon four motivations:
1. Multiple instances of erasing augmented images are generated and pieced, and we chip the larger image irregularly such that views would be weakly related by acquiring peripheral patches from other views;
2. We resize the irregularly chipped views without preserving the aspect ratio to reduce mutual information in the non-mask regions;
3. We feed more than one view (two in this work) to the “positive pair” loss head of the self-supervise algorithms to lessen the inconsistency brought by random selection of masked regions.
4. Considering the above approach for the view generation, we also regularize the predicted similarity between these views. Thus, we could largely prevent the non-task-relevant information from being memorized.
In simple terms, we spawn weakly related child-views that are similar to their parent-view while being considerably different from each other. The overall approach is shown in Figure 1 and Figure 2. The dark and light blue spots denote the negatives samples from different images. The red (k) and green ones (q1 and q2) indicate the positive pairs. The proposed method aims to enlarge the margin between the blue/non-blue spots and the distance between the spots in green color (q1 and q2). For positives in red and green, the margin between the red (k) and each green spot (q1 or q2) is narrowed.
In our experimental analysis, we firstly compare the effectiveness of the proposed approach with other information-erasing family data augmentation. We keep the comparison fair by offering multiple child-views for all the augmentation methods as demonstrating in Figure 1. We show that the piecingand-chipping-based random erasing augmentation out-performs other well-designed augmentation methods by a large margin. We also conduct experiments compared with other state-of-the-art self-supervised learning algorithms. Specifically, we employ MoCo v2 (Chen et al., 2020d) as our backbone and modify its view generation codes with the proposed PCEA. We then achieve a
competitive performance on the linear-probe classification task using the ImageNet-1K datasets. Overall, the main contributions of this work can be summarized as follows:
• We propose Piecing and Chipping enhanced Erasing Augmentation (PCEA), a novel data augmentation approach for the view generation in self-supervised learning algorithms.
• The proposed PCEA data augmentation approach also offers a novel method of utilizing multiply child-views. The method not only reduces the inconsistency in the view generation process but also regularizes the utilization of non-task-relevant information during the self-supervised learning progress.
• We conduct extensive experiments to demonstrate the effectiveness of our method. To the best of our knowledge, this is the first successful attempt in involving the InformationErasing family data augmentation in self-supervised learning algorithms.
2 RELATED WORK
2.1 SELF-SUPERVISED LEARNING
A wide range of self-supervised learning algorithms has been proposed to improve the quality of learned representations. Recent self-supervised learning algorithms can be divided into two categories: Non-contrastive ones that employ positive pairs of sample; Contrastive ones that employ negative pairs of samples. Here the terms positive/negative do not strictly refer to pairs of sample with similar/different semantic information, but pairs of views generated from the same or different anchor samples. In the family of non-contrastive self-supervised learning, BYOL (Grill et al., 2020) achieves an outstanding performance, which relies on two neural networks to represent the visual semantic information; the online and target network interact and learn from each other. SimSiam architecture (Chen & He, 2020) aims at enlarging the similarity between the two augmented views of one image with a shared encoder network. On the other hand, typical contrastive self-supervised learning applies multi-layer perceptions and stop-gradient tricks in case of collapsing (Chen et al., 2020b). To reduce the memory cost of large amount of negative samples, MoCo (He et al., 2020) proposes a momentum memory bank to record negative samples of previous steps. SWAV (Caron et al., 2020) is an online algorithm, which improves the contrastive method without the pairwise comparison. An online clustering loss is constructed, and a multi-crop strategy is introduced to increase the number of views without the extra computational overhead. In this study, we employ both of the self-supervised learning algorithm families to verify the effectiveness and efficiency of our proposed method.
2.2 DATA AUGMENTATION IN SELF-SUPERVISED LEARNING
Data augmentation in vanilla computer vision tasks helps to improve performance by increasing the amount of training data. Specifically, in practical implementation, this technology helps the model find the indistinguishable features in the image, that can reduce the over-fitting of the model like a regularizer. However, in the scenarios of self-supervised learning, the data augmentation plays a much different role. In the SimCLR paper (Chen et al., 2020b), the author carefully examine the effects of different data augmentation w.r.t. the downstream classification tasks. In their conclusion, the Gaussian blur for the input images and a stronger color distortion act as critical roles in obtaining an effective predicted result. SimCLR has experimentally demonstrated that the ImageNet linear classification accuracy at Top-1 is increased from 59.6% to 63.2% by stronger color distortion strength. This conclusion is further confirmed in Chen et al. (2020c), which shows that the accuracy of MoCo v1 with extra blur augmentation is increased by 2.8% to 63.4%. Furthermore, Tian et al. (2020) argues the proper data augmentation should reduce the mutual information between views while keeping task-relevant information, and develops the more aggressive info-min data augmentation approach. However, we consider the regular induced data augmentations are still limited in the desire of fully using the semantic information of visual representation in self-supervised learning. In this work, we focus on the family of data augmentation that masks out semantic information straightforwardly.
2.3 INFORMATION ERASING DATA AUGMENTATION
In paper (Noroozi & Favaro, 2016), a puzzle-based data augmentation method is developed with an unsupervised visual representation manner, which builds a CNN to solve Jigsaw puzzles as a pretext
task for enhancing classification and detection performance. In paper (DeVries & Taylor, 2017), a method named “CutOut” is designed for the objective classification task, which randomly masks square regions of training images and tries to find out less prominent features. These two methods can be regarded as early explorations of advanced data augmentation for object classification and detection-related tasks. With their convenience and efficiency, these two methods reached the highest level of computer vision-related tasks at that time and profoundly influenced other methods. However, this type of single splicing and deletion of images or image parts also limits the performance of the models.
In the object localization area, a weakly supervised framework named “Hide-and-Seek” is proposed in the paper (Singh & Lee, 2017), which randomly hides patches of the images and enhances the model. In this method, not only the most discriminative part of the image can be identified, but other parts with weak discriminative can also be identified. Through the overall organization of each part in the image, the discriminative performance of the model is improved. Another method, MixUp is designed in paper (Zhang et al., 2017), which aims to provide an image data augmentation idea with a convex combination of the training data. With the state-of-art performance in several tasks such as ImageNet2021, CIFAR-10, and CIFAR-100, the method Mixup inspires a potential clue for unsupervised, semi-supervised, and reinforcement learning. Different from the traditional regional dropout or patch removal methods, a CutMix data augmentation method is proposed in paper (Yun et al., 2019), which cuts patches and pasted them among training images with ground truth labels to enhance the reliability and stability of the model. These three methods provide new ideas for data augmentation, and the methods based on them also archived the highest level at that time. However, these methods still do not completely get rid of the relatively inflexible processing methods for images or patches, such as the proportion of the original image and the shape of the patches, which also restricts the performance of the model.
Based on these studies above, a regional dropout strategy is designed as GridMask in paper (Chen et al., 2020a), which provides a controllable method to delete patches of a training image. Compared with previous methods, this structured information dropping method is more effective and avoids random information dropping. At the same time, to overcome the shortages of squared patches in previous studies, a Gaussian filter-based data augmentation method “Milking CowMask” is proposed in paper (French et al., 2020). This method provides more flexibly shaped masks according to turnable parameters in Gaussian filter with fewer correlations and reaches a new state-of-art performance in related tasks. However, these methods discussed above focus on increasing the discriminability of samples in the entire dataset, which results in limited performance in the self-supervised learning cases.
3 METHODOLOGY
3.1 PCEA: PIECING AND CHIPPING ENHANCED ERASING AUGMENTATION
In this section, we first introduce how the views are generated in the proposed Piecing and Chipping enhanced Erasing Augmentation (PCEA) method. The overall approach is depicted in Figure 2.
We refer to two pipelines of image transformation operations as T1 and T2. T2 is the ordinary data augmentation pipeline adopted in state-of-the-art self-supervised learning algorithms (in this paper, we employ the data augmentation strategy of MoCo v2 (Chen et al., 2020d)). T1 is based on T2, with an additional masking operation (in this paper, we employ random erasing). The PCEA method is described as follows:
• Step 1: For each image x ∈ ℝ^{w×h×c} (see footnote 1), we generate views x_v1,2,3,4 and x_k using T1 and T2, respectively. The x_v1,2,3,4 are denoted as “child-views”, while x_k is denoted as the “parent-view”.
• Step 2: We piece the 4 different child-views x_v1,2,3,4 (224×224) to obtain a larger image (448×448). This newly generated image is considered as an alternative image to obtain more positive samples with more substantial semantic information.
1 For the rest of the paper, we let w = h = 224 and omit the channel notation c for the sake of readability.
• Step 3: We locate a candidate region (green rectangle in the figure) at the centroid of the newly generated image (see footnote 2). We then (uniformly) randomly select a segmentation point in the candidate region, and chip the image vertically and horizontally. Thus we obtain a new set of child-views x_q1,2,3,4.
• Step 4: The set of new child-views are resized to their original size (224×224), without preserving the aspect ratio. We finally select 2 child-views as the new positive pairs of their parent-view x_k. A minimal code sketch of the full pipeline is given below.
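The following is a minimal sketch of Steps 1–4 in PyTorch/torchvision, under our own assumptions about the exact transform parameters (the paper specifies only that T2 follows the MoCo v2 recipe and that T1 adds random erasing); the names `pcea_views`, `t1`, and `t2` are ours, and the sketch illustrates the piecing-and-chipping logic rather than reproducing the authors' implementation.

```python
import random
import torch
import torchvision.transforms as T

S = 224  # child-view / parent-view side length

# T2: stand-in for the MoCo v2 augmentation recipe (assumed parameters);
# T1 adds random erasing on top of T2.
t2 = T.Compose([
    T.RandomResizedCrop(S, scale=(0.2, 1.0)),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.RandomApply([T.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
t1 = T.Compose([t2, T.RandomErasing(p=1.0)])  # always erase (assumption)

def pcea_views(img):
    # Step 1: four erased child-views and one parent-view.
    v = [t1(img) for _ in range(4)]          # each (3, S, S)
    k = t2(img)                              # parent-view
    # Step 2: piece the child-views into a 2x2 mosaic of shape (3, 2S, 2S).
    big = torch.cat([torch.cat(v[:2], dim=2), torch.cat(v[2:], dim=2)], dim=1)
    # Step 3: pick a split point in the central SxS candidate region,
    # then chip the mosaic vertically and horizontally.
    cy = random.randint(S // 2, 3 * S // 2)
    cx = random.randint(S // 2, 3 * S // 2)
    chips = [big[:, :cy, :cx], big[:, :cy, cx:], big[:, cy:, :cx], big[:, cy:, cx:]]
    # Step 4: resize back to SxS (aspect ratio not preserved) and keep two.
    resize = lambda t: torch.nn.functional.interpolate(
        t.unsqueeze(0), size=(S, S), mode="bilinear", align_corners=False).squeeze(0)
    q1, q2 = random.sample([resize(c) for c in chips], 2)
    return q1, q2, k
```

In Step 3, the split point is drawn uniformly from the central 224×224 candidate region, matching the default size studied in the ablation.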
3.2 SIMILARITY REGULARIZATION LOSS
Although x_q1 and x_q2 can still be roughly judged as identical by human beings, the randomness in choosing the erased regions and the change of aspect ratio create a considerable margin between the semantic information carried by x_q1 and x_q2. Meanwhile, the ordinary InfoNCE loss aligns both child-views to their parent-view. To prevent the deep model from implicitly aligning the child-views to each other, we add a Similarity Regularization (SimReg) loss term to attain explicit discrimination between them. This loss term is implemented with a simple cosine-style similarity between the embedded representations of the child-views. Accordingly, the loss between q1 and q2 is defined as Eq. (1).
$$\mathcal{L}_{\mathrm{SimReg}} = \frac{q_1 \cdot q_2}{\max\left(\lVert q_1 \rVert_2,\; \lVert q_2 \rVert_2\right)} \tag{1}$$
In the experimental analysis, we empirically find that the loss term is insensitive to the loss weight (the so-called λ in much of the literature). Therefore, we set the loss weight hyper-parameter to 1.0 in all experimental configurations.
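The following is a minimal PyTorch-style sketch of the SimReg term of Eq. (1); the function name, the batched formulation, and the small epsilon guard are our own assumptions for illustration.

```python
import torch

def sim_reg_loss(q1: torch.Tensor, q2: torch.Tensor) -> torch.Tensor:
    """Similarity Regularization (Eq. 1), computed per sample and averaged.

    q1, q2: (N, D) embeddings of the two child-views. Minimizing this
    term explicitly pushes the two child-views apart in embedding space.
    """
    dot = (q1 * q2).sum(dim=1)                              # q1 . q2
    denom = torch.maximum(q1.norm(dim=1), q2.norm(dim=1))   # max(||q1||, ||q2||)
    return (dot / denom.clamp_min(1e-8)).mean()
```

The term is simply added to the main self-supervised loss with weight 1.0, as discussed above.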
2 Here we set the size of the candidate region to be the same as the child-views (224×224); more details are discussed in the ablation study.
4 EXPERIMENTS
4.1 DATASETS & EXPERIMENTAL CONFIGURATIONS
Datasets. In this work, we conduct experiments on the ImageNet ILSVRC-2012 dataset (Deng et al., 2009) with 1.28 million images in 1000 categories (ImageNet-1K) and a subset of images in 100 categories (ImageNet-100), which have been widely utilized as benchmark datasets (Tian et al., 2019; He et al., 2020; Grill et al., 2020; Hu et al., 2020). We also construct a more difficult subset of the original ImageNet-1K dataset, named Small-ImageNet-1000 (S-ImageNet-1K). S-ImageNet-1K keeps only 10 percent of the images from each category, which reduces the richness of visual representations while maintaining the same distribution as the original ImageNet-1K (a sketch of this subsampling is given below). Evaluation is carried out by training a linear probe for the classification task while keeping the weights of the feature extractor frozen. For ImageNet-1K and ImageNet-100, we employ the commonly adopted classification accuracy as the evaluation metric. For S-ImageNet-1K, we employ the average correct classification rate among all 1000 categories, proposed in Le & Yang (2015), as our evaluation metric.
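As an illustration, the per-class subsampling could be implemented as below; the directory layout, sampling seed, and use of symlinks are our assumptions, since the paper does not specify the exact procedure.

```python
import os
import random

def build_s_imagenet(root: str, out: str, frac: float = 0.1, seed: int = 0) -> None:
    """Symlink a class-balanced fraction of ImageNet-1K into a new root."""
    rng = random.Random(seed)
    for cls in sorted(os.listdir(root)):                  # one folder per category
        files = sorted(os.listdir(os.path.join(root, cls)))
        keep = rng.sample(files, max(1, int(len(files) * frac)))
        os.makedirs(os.path.join(out, cls), exist_ok=True)
        for f in keep:
            os.symlink(os.path.join(root, cls, f), os.path.join(out, cls, f))
```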
In addition, we employ the widely used MS COCO dataset (Lin et al., 2014) to verify the proposed PCEA on the object detection task. We fine-tune the ImageNet-1K pre-trained backbone models on the train2017 split and perform evaluation on the val2017 split.
Configurations. We employ the vanilla ResNet-50 (He et al., 2016) equipped with a global average pooling layer on its head as our backbone architecture. We employ a projection head with a single linear layer for the encoders fθ and fε. The feature dimensions of the output of the ResNet-50 pooling layer and of the embedding vector are 2048 and 128, respectively. For other hyper-parameters, we keep the same configuration as in MoCo v2 (Chen et al., 2020d) and SimSiam (Chen & He, 2021). In the MoCo v2 algorithm, the augmented views x_q1 and x_q2 are fed into the encoder network fθ with back-propagation, while x_k is represented as k = fε(x_k) without back-propagation, where fε(·) denotes the momentum encoder (a sketch of this forward pass is given below). In the SimSiam algorithm, we simply average the similarity between the multiple child-views and the parent-view. For the detection task, we adopt the commonly used Faster R-CNN with ResNet-50 as the baseline architecture.
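Below is a minimal sketch of how the two child-views enter the MoCo v2 objective under our reading of the method; the function names are ours, the queue and momentum updates follow the standard MoCo recipe and are omitted, and `sim_reg_loss` refers to the sketch in Section 3.2.

```python
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, k: torch.Tensor, queue: torch.Tensor,
             t: float = 0.2) -> torch.Tensor:
    """Standard MoCo-style InfoNCE. q, k: (N, D) L2-normalized; queue: (K, D)."""
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)   # (N, 1) positive logits
    l_neg = torch.einsum("nc,kc->nk", q, queue)            # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

def pcea_moco_loss(f_q, f_k, x_q1, x_q2, x_k, queue):
    q1 = F.normalize(f_q(x_q1), dim=1)       # child-views: with back-propagation
    q2 = F.normalize(f_q(x_q2), dim=1)
    with torch.no_grad():                    # parent-view: momentum encoder
        k = F.normalize(f_k(x_k), dim=1)
    # Duplicate and average the InfoNCE term over the two child-views,
    # then add SimReg with weight 1.0 (see Sections 3.2 and 5.3).
    loss = 0.5 * (info_nce(q1, k, queue) + info_nce(q2, k, queue))
    return loss + sim_reg_loss(q1, q2)
```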
Training. During training, a mini-batch size of 256 is used on 8 GPUs (Tesla V100 16G), and the initial learning rate is set to 0.03. SGD (Loshchilov & Hutter, 2016) is used as the optimizer; the weight decay and the momentum parameter are set to 0.0001 and 0.9, respectively. We train for 200 epochs for MoCo v2 and 100 epochs for SimSiam, with a cosine learning rate decay. The numbers of negative samples in the momentum queue and the sliding queue are 65536 and 32768, respectively. The temperature is set to 0.2.
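For reference, a sketch of the corresponding optimizer setup (per-epoch scheduler stepping is our assumption):

```python
import torch
import torchvision

model = torchvision.models.resnet50()  # backbone stand-in for the full framework
optimizer = torch.optim.SGD(model.parameters(), lr=0.03,
                            momentum=0.9, weight_decay=1e-4)
# 200 epochs for MoCo v2, 100 for SimSiam, with cosine decay.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
```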
4.2 EXPERIMENTAL RESULTS
ImageNet-100. Following previous work (Chen et al., 2020d; Chen & He, 2021), we evaluate nine data augmentation methods on MoCo v2 (Chen et al., 2020d) and SimSiam (Chen & He, 2021), where linear classifiers are trained on frozen features produced by these methods. The comparison results are reported in Table 1. As can be seen, applying PCEA to MoCo v2, with negative samples involved, achieves the best performance against baselines using other data augmentation methods. In particular, our PCEA outperforms the vanilla baseline by 12.84% and 3.85% in terms of top-1 and top-5 accuracy, which demonstrates the effectiveness of PCEA in learning discriminative representations by treating the positive and negative samples separately. We also observe that SimSiam (Chen & He, 2021) with our PCEA achieves superior performance on ImageNet-100 against previous data augmentations, which further validates the generalizability of PCEA across existing self-supervised methods.
ImageNet-1K. Furthermore, we compare our PCEA with existing state-of-the-art self-supervised methods under the linear classification setting in Table 2. From the results, we observe that PCEA outperforms MoCo v2, the vanilla baseline, by a large margin, i.e., 3.3% in terms of top-1 accuracy. Meanwhile, we achieve competitive results with previous methods in terms of top-1 and top-5 accuracy, which further demonstrates the advantage of PCEA over baselines under the same linear classification setting.
S-ImageNet-1K. Table 3 reports the comparison results of linear classification on our S-ImageNet-1K dataset, a smaller dataset with the same distribution as the original ImageNet-1K but with reduced richness of visual representations. The proposed PCEA with MoCo v2 outperforms its baseline algorithm by a large margin (15.0%) in terms of top-1 accuracy. This superior performance validates the effectiveness and efficiency of PCEA in difficult configurations.
MsCOCO. Table 4 reports the detection performance (mAP) on the MS COCO dataset. The proposed PCEA with MoCo v2 achieves the best result compared to state-of-the-art self-supervised pre-trained backbones. Specifically, it outperforms its baseline MoCo v2 by 1.1% and the supervised pre-trained model by 4.3%.
5 ABLATION STUDY
In this section, we conduct extensive ablation studies to explore how each step of PCEA and the size of the candidate region affect the final performance of our approach. Unless otherwise specified, we perform the experiments on the ImageNet-100 dataset.
5.1 ABLATION ON EACH STEP OF PCEA
In order to explore the effect of each step of PCEA on the final performance, we ablate each step and report the results in Table 5, which describes the effectiveness of each step in the mosaic process. With the same data augmentation as MoCo v2 (Chen et al., 2020d) and two inputs (k and q), the top-1 accuracy on ImageNet-100 is 81.65%. Using the two erased inputs x_q1 and x_q2 from Step 1 improves the performance by 5.5%. For the combination of Steps 2 and 3, we use padding and random crop to produce the output images instead of resizing the split images to 224×224, which achieves a higher accuracy of 90.76%. Adding Step 4 on top of the previous three steps boosts the top-1 and top-5 accuracy to 94.49% and 99.62%, which validates the rationale of the interpolation in PCEA for capturing fine-grained instance features.
5.2 ABLATION ON THE SIZE OF CANDIDATE REGION
To analyze how the size of the candidate region affects the final performance of PCEA, we vary the size among 28, 56, 112, 224, 336, and 448. The comparison results are reported in Table 6. As can be seen, PCEA with a candidate region of 224×224 achieves the best performance among all settings. As the size of the candidate region increases beyond this, performance degrades considerably, which could be caused by more background information being introduced into the selected region. Meanwhile, when the size is decreased to 112×112, PCEA performs worse than the best result in terms of top-1 and top-5 accuracy. This further shows the importance of choosing the right candidate-region size for learning discriminative representations during pre-training.
5.3 ABLATION ON NUMBER OF VIEWS IN LOSS TERMS
We vary the number of child-views that participate in the self-supervised learning loss (InfoNCE for MoCo, cosine similarity for SimSiam). The loss terms are duplicated and averaged according to the number of child-views. We also examine the effect of the SimReg loss term. Table 7 reports these results on both S-ImageNet-1K and ImageNet-1K. Two child-views achieve the best performance among the different configurations. Moreover, the SimReg loss contributes substantially on the difficult S-ImageNet-1K dataset.
6 CONCLUSION
In this work, we propose Piecing and Chipping enhanced Erasing Augmentation (PCEA), a novel approach for employing the information-erasing family of data augmentation methods in self-supervised learning. We compare PCEA against eight existing information-erasing data augmentation methods on commonly-used benchmark datasets, and equip PCEA on two popular self-supervised learning baseline algorithms. Both sets of results demonstrate the effectiveness and efficiency of the proposed PCEA approach. We believe that involving the information-erasing family of data augmentation will have a broader impact on the further development of self-supervised learning algorithms.

1. What is the focus of the paper regarding image augmentation approaches?
2. What are the strengths of the proposed method, particularly its advantages over other information-erasing augmentation techniques?
3. What are the weaknesses of the paper, especially regarding its lack of understanding beyond empirical experiments?
4. How can the authors improve their approach to make it more convincing for conferences like ICLR?
5. Are there any suggestions for additional analyses or visualizations that could enhance the understanding of the embeddings learned using the proposed augmentation method?
Summary Of The Paper
This work proposed a new image augmentation approach for generating "views" in self-supervised learning. The new image augmentation approach consists of several steps: (1) standard augmentation, (2) random erasing, (3) piecing four augmented images into a large patch, and (4) chipping the large patch into four smaller patches. Experiments show this approach is better than other information-erasing augmentation methods.
Review
This work asked a simple question and provided a clear answer. The simple question is "can we use information-erasing augmentation methods in self-supervised learning"? The clear answer is "yes". There are sufficient experiment results showing the benefit of "Piecing and Chipping" compared to other augmentation strategies.
There are no wrong questions, but is the answer good enough for ICLR? So far I am not convinced. What makes PCEA stand out as an augmentation approach for self-supervised learning? Little effort is spent on understanding beyond the empirical experiments. Would visualizing the embeddings learned on the augmented images help?
1. What is the focus and contribution of the paper in terms of data augmentation for contrastive learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and experimental performance?
3. What are the weaknesses of the paper regarding its lack of clarity on the augmentation's technical significance and its comparison to existing works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's experiments and their relevance to state-of-the-art methods?
Summary Of The Paper
The paper describes a data augmentation procedure for creating artificial views for contrastive learning. Specifically, the idea is named 'Piecing and Chipping' where a set of standard augmented views of an anchor image is pieced together and then chipped away via random four-way splits. The pieces are then used as new augmentations for contrastive learning. Experiments on imagenet100 and imagenet-1K linear evaluation show promise. Experiments are also provided on transfer learning tasks demonstrating promising performances.
Review
Strengths:
The paper is easy to follow (apart from a few minor typos and grammatical errors) and the presented idea is quite simple.
Experiments demonstrate promise on imagenet-100 and 1K.
Weakness:
I think the main weakness of this method is perhaps that it is not clear what the proposed augmentation is helping with. Of course, it leads to some improved results, but from a technical perspective, the idea looks quite similar to standard random crops and the recently proposed multi-crop (in SwAV, Caron et al., NeurIPS 2020), where some smaller crops are made to have embeddings similar to a global crop. Thus, while the proposed idea is quite creative, its scientific merit needs more insight, especially with regard to ideas already out there.
The experiments show some improvements over some baselines; however, I would think the state of the art on ImageNet-1K and similar benchmarks is considerably higher than the methods reported as baselines. For example, SwAV is already at about 75+ and DINO (Caron et al.) is even higher (although the paper seems to report weaker numbers for some reason). Thus, the paper should provide comparisons against these methods to see if the proposed augmentations offer anything better.
ICLR | Title
Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning
Abstract
In self-supervised learning frameworks, deep networks are optimized to align different views of an instance that contains the similar visual semantic information. The views are generated by conducting series of data augmentation to the anchor samples. Although the data augmentation operations are often designed to be aggressive and extensive to lower the mutual information between views, the family of Information-Erasing data augmentation that masks out region of images is barely considered. In this work, we propose the Piecing and Chipping enhanced Erasing Augmentation (PCEA) approach to making the self-supervised learning algorithms benefit from the effectiveness of Information-Erasing data augmentation. Specifically, we design a pipeline to generate mutually weakly related transformed views using random erasing and build corresponding loss terms to take advantage of these views. Extensive experiments demonstrate the effectiveness of our method. Particularly, applying our PCEA to MoCo v2 improves the baseline by 12.84%, 3.3% in terms of linear classification on ImageNet-100 and ImageNet-1K.
1 INTRODUCTION
The deep convolutional neural networks (CNNs) (Krizhevsky et al., 2012) have a great success in computer vision tasks, and in recent years, self-supervised learning (Oord et al., 2018; Chen et al., 2020b;c; He et al., 2020; Li et al., 2021; Zbontar et al., 2021; Grill et al., 2020; Chuang et al., 2020; Hu et al., 2020; Kim et al., 2020; Zhu et al., 2020; Caron et al., 2020; Xiao et al., 2020; Kalantidis et al., 2020) also achieve a great success and gained attentions because of its ability of reducing the labor cost on large-scale dataset annotation. Self-supervised learning aims at learning some forms of image representations by figuring out a pattern that can explain the image reasonably. The learned pattern can be used in downstream tasks, such as image classification, object detection, segmentation and etc. The self-supervised learning can be achieved majorly in two different styles: contrastive (Chen et al., 2020b;c; He et al., 2020; Chuang et al., 2020) and non-contrastive (Li et al., 2021; Zbontar et al., 2021; Grill et al., 2020) (though the detailed taxonomy of self-supervised learning is the topic of this study). The key component of both styles is the generation of views of the anchor sample.
The term “view” in self-supervised learning is roughly grounded as “augmented or transformed samples that maintain semantically similar information to the anchor sample”. In the computer vision tasks, the generation of views is accomplished a series of domain transformation operations, e.g. ColorJitter, RandomGrayscale, GaussianBlur. Former literature has examined the influence of the adaptation of different types of transformation. In these works, the composition of transformation operations is considered as the crucial part for learning good representations (Chen et al., 2020b). And the proper approach to reduce the mutual information between views while keeping task-relevant information intact (Tian et al., 2020).
One family of data augmentation that is commonly employed in computer vision tasks is “informationerasing”. By which, we refer to the methods that mask small regions of an image, such that the information concerning the objects in the image is erased (DeVries & Taylor, 2017; Yun et al., 2019; French et al., 2020; Singh & Lee, 2017; Chen et al., 2020a). However, this family of data augmentation is barely seen in self-supervised learning algorithms. While in (Chen et al., 2020b), the researchers also denote the Cutout (DeVries & Taylor, 2017) as an unflavored augmentation method to generate
views. We conjecture the primary reason for the inferior performance of the information-erasing family in the self-supervised learning algorithms is the inconsistency in preserving the task-relevant information. At the same time, information-erasing methods do not contribute to the reduction of the mutual information between views and anchor images in the non-masked regions. As a consequence, the generated views could be valueless for feature extractors to learning semantically meaningful representations.
In this work, we tackle the aforementioned drawbacks of inconsistency and mutual information reduction. We build an approach with the simple random erasing method to provide stable views with high qualities to improve the performance of self-supervised learning algorithms. We refer to our approach as Piecing and Chipping enhanced Erasing Augmentation (PCEA), which is built upon four motivations:
1. Multiple instances of erasing augmented images are generated and pieced, and we chip the larger image irregularly such that views would be weakly related by acquiring peripheral patches from other views;
2. We resize the irregularly chipped views without preserving the aspect ratio to reduce mutual information in the non-mask regions;
3. We feed more than one view (two in this work) to the “positive pair” loss head of the self-supervise algorithms to lessen the inconsistency brought by random selection of masked regions.
4. Considering the above approach for the view generation, we also regularize the predicted similarity between these views. Thus, we could largely prevent the non-task-relevant information from being memorized.
In simple terms, we spawn weakly related child-views that are similar to their parent-view while being considerably different from each other. The overall approach is shown in Figure 1 and Figure 2. The dark and light blue spots denote the negatives samples from different images. The red (k) and green ones (q1 and q2) indicate the positive pairs. The proposed method aims to enlarge the margin between the blue/non-blue spots and the distance between the spots in green color (q1 and q2). For positives in red and green, the margin between the red (k) and each green spot (q1 or q2) is narrowed.
In our experimental analysis, we firstly compare the effectiveness of the proposed approach with other information-erasing family data augmentation. We keep the comparison fair by offering multiple child-views for all the augmentation methods as demonstrating in Figure 1. We show that the piecingand-chipping-based random erasing augmentation out-performs other well-designed augmentation methods by a large margin. We also conduct experiments compared with other state-of-the-art self-supervised learning algorithms. Specifically, we employ MoCo v2 (Chen et al., 2020d) as our backbone and modify its view generation codes with the proposed PCEA. We then achieve a
competitive performance on the linear-probe classification task using the ImageNet-1K datasets. Overall, the main contributions of this work can be summarized as follows:
• We propose Piecing and Chipping enhanced Erasing Augmentation (PCEA), a novel data augmentation approach for the view generation in self-supervised learning algorithms.
• The proposed PCEA data augmentation approach also offers a novel method of utilizing multiply child-views. The method not only reduces the inconsistency in the view generation process but also regularizes the utilization of non-task-relevant information during the self-supervised learning progress.
• We conduct extensive experiments to demonstrate the effectiveness of our method. To the best of our knowledge, this is the first successful attempt in involving the InformationErasing family data augmentation in self-supervised learning algorithms.
2 RELATED WORK
2.1 SELF-SUPERVISED LEARNING
A wide range of self-supervised learning algorithms has been proposed to improve the quality of learned representations. Recent self-supervised learning algorithms can be divided into two categories: Non-contrastive ones that employ positive pairs of sample; Contrastive ones that employ negative pairs of samples. Here the terms positive/negative do not strictly refer to pairs of sample with similar/different semantic information, but pairs of views generated from the same or different anchor samples. In the family of non-contrastive self-supervised learning, BYOL (Grill et al., 2020) achieves an outstanding performance, which relies on two neural networks to represent the visual semantic information; the online and target network interact and learn from each other. SimSiam architecture (Chen & He, 2020) aims at enlarging the similarity between the two augmented views of one image with a shared encoder network. On the other hand, typical contrastive self-supervised learning applies multi-layer perceptions and stop-gradient tricks in case of collapsing (Chen et al., 2020b). To reduce the memory cost of large amount of negative samples, MoCo (He et al., 2020) proposes a momentum memory bank to record negative samples of previous steps. SWAV (Caron et al., 2020) is an online algorithm, which improves the contrastive method without the pairwise comparison. An online clustering loss is constructed, and a multi-crop strategy is introduced to increase the number of views without the extra computational overhead. In this study, we employ both of the self-supervised learning algorithm families to verify the effectiveness and efficiency of our proposed method.
2.2 DATA AUGMENTATION IN SELF-SUPERVISED LEARNING
Data augmentation in vanilla computer vision tasks helps to improve performance by increasing the amount of training data. Specifically, in practical implementation, this technology helps the model find the indistinguishable features in the image, that can reduce the over-fitting of the model like a regularizer. However, in the scenarios of self-supervised learning, the data augmentation plays a much different role. In the SimCLR paper (Chen et al., 2020b), the author carefully examine the effects of different data augmentation w.r.t. the downstream classification tasks. In their conclusion, the Gaussian blur for the input images and a stronger color distortion act as critical roles in obtaining an effective predicted result. SimCLR has experimentally demonstrated that the ImageNet linear classification accuracy at Top-1 is increased from 59.6% to 63.2% by stronger color distortion strength. This conclusion is further confirmed in Chen et al. (2020c), which shows that the accuracy of MoCo v1 with extra blur augmentation is increased by 2.8% to 63.4%. Furthermore, Tian et al. (2020) argues the proper data augmentation should reduce the mutual information between views while keeping task-relevant information, and develops the more aggressive info-min data augmentation approach. However, we consider the regular induced data augmentations are still limited in the desire of fully using the semantic information of visual representation in self-supervised learning. In this work, we focus on the family of data augmentation that masks out semantic information straightforwardly.
2.3 INFORMATION ERASING DATA AUGMENTATION
In paper (Noroozi & Favaro, 2016), a puzzle-based data augmentation method is developed with an unsupervised visual representation manner, which builds a CNN to solve Jigsaw puzzles as a pretext
task for enhancing classification and detection performance. In paper (DeVries & Taylor, 2017), a method named “CutOut” is designed for the objective classification task, which randomly masks square regions of training images and tries to find out less prominent features. These two methods can be regarded as early explorations of advanced data augmentation for object classification and detection-related tasks. With their convenience and efficiency, these two methods reached the highest level of computer vision-related tasks at that time and profoundly influenced other methods. However, this type of single splicing and deletion of images or image parts also limits the performance of the models.
In the object localization area, a weakly supervised framework named “Hide-and-Seek” is proposed in the paper (Singh & Lee, 2017), which randomly hides patches of the images and enhances the model. In this method, not only the most discriminative part of the image can be identified, but other parts with weak discriminative can also be identified. Through the overall organization of each part in the image, the discriminative performance of the model is improved. Another method, MixUp is designed in paper (Zhang et al., 2017), which aims to provide an image data augmentation idea with a convex combination of the training data. With the state-of-art performance in several tasks such as ImageNet2021, CIFAR-10, and CIFAR-100, the method Mixup inspires a potential clue for unsupervised, semi-supervised, and reinforcement learning. Different from the traditional regional dropout or patch removal methods, a CutMix data augmentation method is proposed in paper (Yun et al., 2019), which cuts patches and pasted them among training images with ground truth labels to enhance the reliability and stability of the model. These three methods provide new ideas for data augmentation, and the methods based on them also archived the highest level at that time. However, these methods still do not completely get rid of the relatively inflexible processing methods for images or patches, such as the proportion of the original image and the shape of the patches, which also restricts the performance of the model.
Based on these studies above, a regional dropout strategy is designed as GridMask in paper (Chen et al., 2020a), which provides a controllable method to delete patches of a training image. Compared with previous methods, this structured information dropping method is more effective and avoids random information dropping. At the same time, to overcome the shortages of squared patches in previous studies, a Gaussian filter-based data augmentation method “Milking CowMask” is proposed in paper (French et al., 2020). This method provides more flexibly shaped masks according to turnable parameters in Gaussian filter with fewer correlations and reaches a new state-of-art performance in related tasks. However, these methods discussed above focus on increasing the discriminability of samples in the entire dataset, which results in limited performance in the self-supervised learning cases.
3 METHODOLOGY
3.1 PCEA: PIECING AND CHIPPING ENHANCED ERASING AUGMENTATION
In this section, we first introduce how the views are generated in the proposed Piecing and Chipping enhanced Erasing Augmentation (PCEA) method. The overall approach is depicted in Figure 2.
We refer 2 pipelines of image transformation operations as T1 and T2. T2 is an ordinary adopted data augmentation method used in state-of-the-art self-supervised learning algorithms (in this paper, we employ the data augmentation strategy in MoCov2 (Chen et al., 2020d)). T1 is based on T2, with an additional masking operation (in this paper, we employ random erasing). The PCEA method is described as follows:
• Step 1: For each image x ∈ Rw×h×c1, we generate views x_v1,2,3,4 and x_k using T1 and T2, respectively. The x_v1,2,3,4 are denoted as “child-views”, while x_k is denoted as the “parent-view”.
• Step 2: We piece the 4 different child-views x_v1,2,3,4 (224×224) to obtain a larger image (448×448). This newly generated image is considered as an alternative image to obtain more positive samples with more substantial semantic information.
1For the rest of the paper, we let the w, h = 224 and omit the channel notation c, for sake of good readability.
• Step 3: We locate a candidate region (green rectangle in the figure) at the centroid of the newly generated image. 2 We then (uniformly) randomly select a segmentation point in the candidate region, and chip the image vertically and horizontally. Thus we obtain a new set of child-views x_q1,2,3,4.
• Step 4: The set of new child-views is resized to the original size (224×224), without preserving the aspect ratio. We finally select 2 child-views as the new positive pairs of their parent-view x_k, as sketched below.
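A minimal sketch of the four steps follows. It assumes the caller supplies the two transform pipelines t1 (T1, with random erasing) and t2 (T2) as PIL-to-PIL callables; the function name and structure are ours, for illustration only:

import random
from PIL import Image

def pcea_views(img, t1, t2, size=224, region=224):
    parent = t2(img)                                 # Step 1: parent-view x_k
    children = [t1(img) for _ in range(4)]           # Step 1: child-views x_v1..4
    canvas = Image.new("RGB", (2 * size, 2 * size))  # Step 2: piece into 448x448
    for i, c in enumerate(children):
        canvas.paste(c, ((i % 2) * size, (i // 2) * size))
    lo = size - region // 2                          # Step 3: centered candidate region
    sx, sy = random.randint(lo, lo + region), random.randint(lo, lo + region)
    boxes = [(0, 0, sx, sy), (sx, 0, 2 * size, sy),
             (0, sy, sx, 2 * size), (sx, sy, 2 * size, 2 * size)]
    # Step 4: resize without preserving the aspect ratio; keep 2 new child-views.
    new_children = [canvas.crop(b).resize((size, size)) for b in boxes]
    q1, q2 = random.sample(new_children, 2)
    return q1, q2, parent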
3.2 SIMILARITY REGULARIZATION LOSS
Although x_q1 and x_q2 can still be roughly judged as identical by human beings, the randomness in choosing the erased regions and the change of aspect ratio create a considerable margin between the semantic information of the visual representations of x_q1 and x_q2. Meanwhile, the ordinary InfoNCE loss aligns both child-views to their parent-view. To prevent the deep model from implicitly aligning the child-views to each other, we add an additional Similarity Regularization (SimReg) loss term to attain explicit discrimination between them. This loss term is implemented as a simple cosine-style similarity between the embedded representations of the child-views. Accordingly, the loss between q1 and q2 is defined in Equation (1).
L_SimReg = (q1 · q2) / max(‖q1‖2, ‖q2‖2)    (1)
In the experimental analysis, we empirically find that performance is insensitive to the weight of this loss term (the λ commonly used in the literature). Therefore, we leave the loss-weight hyper-parameter at 1.0 in all experimental configurations.
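A PyTorch sketch of Equation (1) follows. The extracted equation is ambiguous about the normalizer, so reading it literally as the larger of the two embedding norms is an assumption on our part:

import torch

def simreg_loss(q1: torch.Tensor, q2: torch.Tensor) -> torch.Tensor:
    """SimReg term of Eq. (1): dot product of the two child-view embeddings,
    normalized by the larger of their L2 norms, averaged over the batch."""
    num = (q1 * q2).sum(dim=-1)
    den = torch.maximum(q1.norm(dim=-1), q2.norm(dim=-1)).clamp_min(1e-8)
    return (num / den).mean()

# total_loss = info_nce_loss + 1.0 * simreg_loss(q1, q2)  # loss weight fixed at 1.0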
²Here we set the size of the candidate region to be the same as the child-views (224×224); more details are discussed in the ablation study.
4 EXPERIMENTS
4.1 DATASETS & EXPERIMENTAL CONFIGURATIONS
Datasets. In this work, we conduct experiments on the ImageNet ILSVRC-2012 dataset (Deng et al., 2009) with 1.28 million images in 1000 categories (ImageNet-1K) and a subset of images in 100 categories (ImageNet-100), which have been widely utilized as benchmark datasets (Tian et al., 2019; He et al., 2020; Grill et al., 2020; Hu et al., 2020). We also construct a more difficult subset of the original ImageNet-1K dataset, named Small-ImageNet-1K (S-ImageNet-1K). S-ImageNet-1K selects only 10 percent of the images from each category, which reduces the richness of visual representations while maintaining the same representation distribution as the original ImageNet-1K. Evaluation is carried out by training a linear probe for the classification task while keeping the weights of the feature extractor frozen. For ImageNet-1K and ImageNet-100, we employ the commonly adopted classification accuracy as the evaluation metric. For S-ImageNet-1K, we employ the average correct classification rate among all 1000 categories proposed in Le & Yang (2015) as our evaluation metric.
In addition, we also employ the widely acknowledged MsCOCO (Lin et al., 2014) to verify the proposed PCEA with the object detection task. We fine-tune the ImageNet-1K pre-trained backbone models using the train2017 split, and perform evaluation on the val2017 split.
Configurations. We employ the vanilla ResNet-50 (He et al., 2016) equipped with a global average pooling layer on its head as our backbone architecture. We use a projection head with only one linear layer for the encoders fθ and fε. The feature dimensions of the ResNet-50 pooling output and the embedding vector are 2048 and 128, respectively. For other hyper-parameters, we keep the same configuration as in MoCo v2 (Chen et al., 2020d) and SimSiam (Chen & He, 2021). In the MoCo v2 algorithm, the augmented views x_q1 and x_q2 are fed into the encoder network fθ with back-propagation. Meanwhile, x_k is represented as k = fε(x_k) without back-propagation, where fε(·) denotes the momentum encoder. In the SimSiam algorithm, we simply average the similarity between the multiple child-views and the parent-view. For the detection task, we adopt the commonly used Faster-RCNN with ResNet-50 as the baseline architecture.
Training. During training, a mini-batch size of 256 is used on 8 GPUs (Tesla V100 16G), and the initial learning rate is set to 0.03. SGD is used as the optimizer; the weight decay and the momentum parameter are set to 0.0001 and 0.9, respectively. We train for 200/100 epochs with a cosine learning rate decay (Loshchilov & Hutter, 2016) for MoCo v2 and SimSiam, respectively. The numbers of negative samples in the momentum queue and the sliding queue are 65536 and 32768, respectively. The temperature is set to 0.2.
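For concreteness, the optimizer and schedule stated above could be instantiated as follows; the `model` below is a placeholder standing in for the full backbone plus projection head:

import torch

model = torch.nn.Linear(2048, 128)  # placeholder for the backbone + projection head
optimizer = torch.optim.SGD(model.parameters(), lr=0.03,
                            momentum=0.9, weight_decay=1e-4)
# Cosine learning-rate decay over 200 epochs (100 for SimSiam).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)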
4.2 EXPERIMENTAL RESULTS
ImageNet-100. Following previous work (Chen et al., 2020d; Chen & He, 2021), we evaluate nine data augmentation methods on MoCo v2 (Chen et al., 2020d) and SimSiam (Chen & He, 2021), where linear classifiers are trained on the frozen features produced by these methods. The comparison results are reported in Table 1. As can be seen, applying PCEA to MoCo v2, with negative samples involved, achieves the best performance against baselines using other data augmentation methods. In particular, our PCEA outperforms the vanilla baseline by 12.84% and 3.85% in terms of top-1 and top-5 accuracy, respectively. This demonstrates the effectiveness of PCEA in learning discriminative representations by treating the positive and negative samples separately. We can also observe that SimSiam (Chen & He, 2021) with our PCEA achieves superior performance on ImageNet-100 against previous data augmentations, which further validates the generalizability of PCEA to existing contrastive self-supervised methods.
ImageNet-1K. Furthermore, we compare our PCEA with existing state-of-the-art self-supervised methods under the linear classification setting in Table 2. From the results, we can observe that our PCEA outperforms MoCo v2, the vanilla baseline, by a large margin, i.e., 3.3% in terms of top-1 accuracy. Meanwhile, we also achieve competitive results with previous methods in terms of top-1 and top-5 accuracy, which further demonstrates the advantage of our PCEA over baselines under the same linear classification setting.
S-ImageNet-1K. Table 3 reports the linear classification results on our S-ImageNet-1K dataset, a smaller dataset with the same distribution as the original ImageNet-1K but less richness in visual representations. The proposed PCEA with MoCo v2 outperforms its baseline algorithm by a large margin (15.0%) in terms of top-1 accuracy. This superior performance validates the effectiveness and efficiency of PCEA in difficult configurations.
MsCOCO. Table 4 reports the detection performance (mAP) on the MsCOCO dataset. The proposed PCEA with MoCo v2 achieves the best result compared to state-of-the-art self-supervised pre-trained backbones. Specifically, it outperforms its baseline MoCo v2 by 1.1% and the supervised pre-trained model by 4.3%.
5 ABLATION STUDY
In this section, we conduct extensive ablation studies to explore how each step of our PCEA and the size of the candidate region affect the final performance of our approach. Unless otherwise specified, we perform the experiments on the ImageNet-100 dataset.
5.1 ABLATION ON EACH STEP OF PCEA
In order to explore the effect of each step of our PCEA on the final performance, we ablate each step and report the experimental results in Table 5, which describe the effectiveness of each step in the PCEA process. The top-1 accuracy on ImageNet-100 with the same data augmentation processing as MoCo v2 (Chen et al., 2020d) is 81.65% with two inputs (k and q). The result then improves with the two inputs x_q1 and x_q2 from Step 1, which increases the performance by 5.5%. For the combination of Step 2 and Step 3, we use padding and random crop to modify the output images instead of resizing the split images to 224×224, which achieves a higher accuracy of 90.76%. Adding Step 4 to the previous three steps boosts the top-1 and top-5 accuracy to 94.49% and 99.62%, which validates the rationality of the interpolation in our PCEA for capturing fine-grained instance features.
5.2 ABLATION ON THE SIZE OF CANDIDATE REGION
To analyze how the size of the candidate region affects the final performance of our PCEA, we vary the size over 28, 56, 112, 224, 336, and 448. The comparison results are reported in Table 6. As can be seen, our PCEA with a size of 224×224 achieves the best performance compared to other settings. As the size of the candidate region increases, the performance of PCEA degrades considerably, which could be caused by more background information being introduced in the selected region. Meanwhile, when the size of the candidate region is decreased to 112×112, PCEA performs worse than the best result in terms of both top-1 and top-5 accuracy. This further shows the importance of choosing the right size of the candidate region for learning more discriminative representations during pre-training.
5.3 ABLATION ON NUMBER OF VIEWS IN LOSS TERMS
We vary the number of child-views participating in the self-supervised learning loss (InfoNCE for MoCo, cosine similarity for SimSiam). The loss terms are duplicated and averaged according to the number of child-views. We also conduct experiments on the effect of the SimReg loss term. Table 7 reports these results on both S-ImageNet-1K and ImageNet-1K. It can be seen that two child-views achieve the best performance among the different configurations. On the other hand, the SimReg loss is particularly effective on the difficult S-ImageNet-1K dataset.
6 CONCLUSION
In this work, we propose Piecing and Chipping enhanced Erasing Augmentation (PCEA), a novel approach for employing the information-erasing family of data augmentation methods in self-supervised learning scenarios. We compare against eight existing information-erasing data augmentation methods on commonly-used benchmark datasets, and we equip PCEA on two popular self-supervised learning baseline algorithms. Both sets of results demonstrate the effectiveness and efficiency of the proposed PCEA approach. We believe the involvement of the information-erasing family of data augmentation will have a broader impact on the further development of self-supervised learning algorithms. | 1. What is the focus and contribution of the paper on data augmentation?
2. What are the strengths and weaknesses of the proposed method, particularly in its application to ImageNet-100 and MsCOCO?
3. Do you have any concerns or questions regarding the motivations and procedures of the data augmentation pipeline?
4. How do you assess the effectiveness and efficiency of the SimReg loss in regularizing the utilization of child-views?
5. Are there any limitations or challenges in implementing the PCEA method in real-world scenarios due to the number of hyperparameters involved? | Summary Of The Paper
Review | Summary Of The Paper
This work proposes a data augmentation method, namely PCEA. The augmentation process is built on the intuition that the spawned child-views should be different from yet similar to the parent-view. This work also presents the SimReg loss to regularize the utilization of the child-views.
Review
The proposed method significantly improves the accuracy of linear classification on ImageNet-100. However, the improvement on MsCOCO is marginal. It seems that the proposed method has limited generalizability.
The data augmentation pipeline is reasonable, but the motivations of the method are unclear. For instance, in step 2, I wonder why the authors propose to piece four different child-views? In step 3, how to locate the candidate region and why?
There are too many hyper-parameters in the PCEA method. In Section 5, a few are determined by grid search, but searching all of them is prohibitively expensive. So, how can one make sure that the PCEA method works in real-world scenarios?
I am confused about the Similarity Regularization loss. What do q_1 and q_2 mean in Equation 1? More importantly, what is the training objective in the experiments? |
ICLR | Title
CNNSAT: Fast, Accurate Boolean Satisfiability using Convolutional Neural Networks
Abstract
Boolean satisfiability (SAT) is one of the most well-known NP-complete problems and has been extensively studied. State-of-the-art solvers exist and have found a wide range of applications. However, they still do not scale well to formulas with hundreds of variables for uniform 3-SAT problems. To tackle this fundamental scalability challenge, we introduce CNNSAT, a fast and accurate statistical decision procedure for SAT based on convolutional neural networks. CNNSAT’s effectiveness is due to a precise and compact representation of Boolean formulas. On both real and synthetic formulas, CNNSAT is highly accurate and orders of magnitude faster than the state-of-the-art solver Z3. We also describe how to extend CNNSAT to predict satisfying assignments when it predicts a formula to be satisfiable.
1 INTRODUCTION
The Boolean satisfiability problem, or SAT, is a classical decision problem. Given a propositional formula φ, SAT needs to decide whether φ has a satisfying assignment to its variables. If the answer is yes, we say that the formula φ is satisfiable, or SAT for short. Otherwise, it is unsatisfiable, or UNSAT for short. For example, the formula (x1∨x2)∧(¬x1∨x2)∧(x1∨¬x2) is satisfiable when both x1 and x2 are true (i.e., x1 = x2 = true). Conversely, (x1 ∨x2)∧ (¬x1 ∨x2)∧ (x1 ∨¬x2)∧ (¬x1 ∨¬x2) cannot be satisfied by any of the possible assignments.
In other words, SAT asks whether the variables of the given Boolean formula φ can be consistently assigned the values true or false such that the formula evaluates to true. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the formula is false for all possible variable assignments and is unsatisfiable.
SAT is a classical NP-complete problem, and in fact was the first problem proved NP-complete. Many hard problems naturally reduce to SAT, such as the traveling salesman problem (TSP) and clique detection. SAT has been extensively studied in the literature for decades because it is a foundational problem and has wide applications. The more general satisfiability modulo theories (SMT) can also be reduced to SAT solving.
SAT is one of the most investigated problems, and numerous heuristics exist to help speed up SAT solving. However, state-of-the-art solvers do not yet scale to large, difficult formulas, such as ones with hundreds of variables and thousands of clauses for uniform 3-SAT problems (Xu et al., 2012). This is because the search space for solutions increases exponentially w.r.t. the number of variables. Most search-based SAT solvers are based on the DPLL approach (Davis et al., 1962), but the search space, even reduced, is still intractable for very large formulas.
State-of-the-art methods for SAT adopt the Conflict-Driven Clause Learning (CDCL) (Silva & Sakallah, 1996; Bayardo Jr & Schrag, 1997) algorithm. This is a systematic search algorithm but employs various optimizations to improve efficiency. However, because the general problem is NP-complete, systematic search algorithms have exponential worst-case complexity, which limits the scalability of these methods.
There exist several pieces of previous work that try to use machine/deep learning methods to improve SAT solvers (Loreggia et al., 2016; Fei & Rompf, 2018), classify SAT/UNSAT (Bünz & Lamm, 2017; Grozea & Popescu, 2014; Devlin & O’Sullivan, 2008), or directly solve SAT instances (Selsam et al., 2018). However, all these approaches focus on SAT problems with a small number of variables.
In this paper, we introduce CNNSAT, a fast and accurate technique based on Convolutional Neural Networks (Krizhevsky et al., 2012) to predict both satisfiability and satisfying assignments. Evaluated on a data set containing 3-SAT problems with up to 410 variables, CNNSAT is able to predict SAT/UNSAT with more than 95% accuracy and orders of magnitude faster than state-of-the-art solvers. We also introduce optimizations to further improve CNNSAT’s scalability. As for the more general SMT problems, CNNSAT is able to predict their satisfiability with more than 73% accuracy.
2 PRELIMINARIES
Convolutional neural network (CNN). CNN is a class of deep, feed-forward artificial neural networks. Many successful applications of CNNs exist in image and sound processing. A CNN has an input and an output layer, as well as multiple hidden layers, which typically consist of convolutional layers, pooling layers, fully connected layers and normalization layers.
The SAT and 3-SAT problems. In Boolean logic, a formula is in conjunctive normal form (CNF) if it is a conjunction of one or more sub-formulas. For a CNF formula, each of its sub-formulas is called a clause, which is a disjunction of literals (i.e., variables or their negations). The clause-to-variable ratio of a CNF formula is defined as the ratio of the number of clauses over the number of variables.
Each SAT instance can be represented in CNF. A CNF formula has a satisfying assignment iff there exists at least one assignment for each variable in the formula such that the formula evaluates to true. The objective of SAT solving is to determine whether or not a given formula is satisfiable, and produce a satisfying assignment when the formula is satisfiable.
3-SAT is a special case of SAT where the number of literals in each clause is up to three. Generalizing 3-SAT, N -SAT requires all clauses having no more than N literals, and uniform N -SAT requires all clauses having exactly N literals. 3-SAT is also NP-complete, and, in general, N -SAT, for N > 2, can be reduced to 3-SAT.
The SMT problem. Satisfiability Modulo Theories (SMT) refers to the problem of determining whether a first-order formula is satisfiable w.r.t. some logical theories. It is typically applied to the theory of real numbers, the theory of integers, and the theories of various data structures, such as lists, arrays, and bit vectors.
For brevity, hereafter if a SAT problem instance is in CNF, we refer to it as CNF. Otherwise, we still use SAT to refer to the more general case.
3 CNNSAT
This section presents the technical details behind our approach. In particular, it describes the representation that we introduce to encode CNF formulas, the architecture of our proposed neural network, and the method that we use to find satisfying assignments.
3.1 REPRESENTATION
A SAT problem has a simple syntactic structure and therefore can be encoded into a syntax-based representation such as an abstract syntax tree (AST). The semantics of propositional logic induces rich invariance that such syntactic representations would ignore, e.g., permutation and negation invariance (Selsam et al., 2018). Permutation invariance stipulates that the satisfiability of a SAT problem is not affected by swapping variables (e.g., swapping all occurrences of x1 with those of x2 in the SAT instance). Negation invariance means that negating every literal corresponding to a given variable (e.g., replacing xi by ¬xi, and ¬xi by xi, for some variable xi in the SAT instance) likewise does not affect satisfiability. As noted by Selsam et al. (2018), syntax-based representations do not capture the semantics of SAT problems. In other words, they cannot identify even the simplest semantic equivalences among SAT problems, such as the permutation and negation invariance discussed earlier. On the other hand, even though syntax-based representations may not accurately capture semantic equivalence, a sufficient amount of training data may allow neural networks to learn and predict the semantics of SAT formulas. Our evaluation in Section 5 confirms this hypothesis. In addition, for certain applications, most CNFs do not share the same or similar semantics. Therefore, we adopt a syntax-based representation to balance accuracy and scalability.
SAT Representation. Although a SAT problem can be represented in different forms, we choose the most common CNF format. Each clause in a CNF formula φ is represented by a vector v, where v = 〈e1, e2, ..., en〉, and the dimension of v, n, corresponds to the number of variables in φ. For each element ei in the vector, we set it to 0 if the corresponding variable xi does not occur in the clause, -1 if the literal ¬xi appears in the clause, and 1 if the literal xi appears in the clause. Collectively, the vectors for φ’s clauses form an m× n matrix, where m is the number of clauses and n the number of variables.
Figure 1 shows an example to illustrate this representation. The CNF formula is shown in the left sub-figure, while the representation is shown in the right sub-figure. The first line in the left sub-figure (p CNF 5 6) indicates that the CNF has 5 variables and 6 clauses. The other rows in the left sub-figure are in the format 〈vi1 vi2 vi3〉, where i indexes the i-th clause and vij is the j-th literal of that clause; a negative value indicates that the variable is negated. The actual CNF formula is
(x1∨x2∨x3)∧(x2∨¬x3∨x4)∧(x1∨¬x2∨¬x3)∧(x1∨x2∨x4)∧(x3∨¬x4∨¬x5)∧(¬x3∨x4∨x5)
From the table, we can see that the representation in the right sub-figure encodes all the values of the variables into corresponding values in the matrix.
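As an illustration (our sketch, not code from the paper), converting a DIMACS-style clause list into this matrix is direct:

import numpy as np

def cnf_to_matrix(clauses, num_vars):
    """Encode a DIMACS-style clause list as the m x n matrix described above.
    clauses: list of lists of nonzero ints, e.g. [[1, 2, 3], [2, -3, 4], ...]."""
    mat = np.zeros((len(clauses), num_vars), dtype=np.int8)
    for i, clause in enumerate(clauses):
        for lit in clause:
            mat[i, abs(lit) - 1] = 1 if lit > 0 else -1
    return mat

# The formula from Figure 1 (5 variables, 6 clauses):
phi = [[1, 2, 3], [2, -3, 4], [1, -2, -3], [1, 2, 4], [3, -4, -5], [-3, 4, 5]]
print(cnf_to_matrix(phi, 5))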
This representation is straightforward and the conversion is efficient. Note that this is a sparse matrix because only a small number of elements in each row are nonzero. However, we observe that, in practice, a SAT problem may have as many as millions of variables and clauses. At such a large scale, these SAT problems cannot fit in memory. Therefore, we propose a compact representation to improve scalability.
Our core idea is to split a matrix into smaller sub-matrices and summarize information for each sub-matrix. First, we define a fixed size sliding window. Then, we split the original matrix into sub-matrices according to the size of the original matrix and the sliding window. For each submatrix, ri = 〈pi, ni〉 is a compact representation for the i-th sub-matrix, where pi is the number of positive values in the sub-matrix and ni the number of negative values. Therefore, each sub-matrix is converted to a list with 2 elements. It is worth noting that when the size of the sliding window is 1 × 1, it retains the exact information in the original matrix. In the next section, we introduce additional optimizations for the compact matrix for better performance. Our experimental evaluation shows that this representation can accurately capture semantic equivalence.
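One way to implement this compaction (our sketch; the window size and data types are illustrative) is:

import numpy as np

def compress(mat: np.ndarray, win: int) -> np.ndarray:
    """Summarize each win x win sub-matrix as (#positive entries, #negative entries).
    With win = 1 this losslessly reproduces the original matrix's information."""
    rows = -(-mat.shape[0] // win)   # ceiling division
    cols = -(-mat.shape[1] // win)
    out = np.zeros((rows, cols, 2), dtype=np.int32)
    for i in range(rows):
        for j in range(cols):
            block = mat[i * win:(i + 1) * win, j * win:(j + 1) * win]
            out[i, j] = ((block > 0).sum(), (block < 0).sum())
    return out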
SMT Representation. There are several straightforward representations for SAT problems. In contrast, representing SMT problems is more challenging. Although we can design custom representations for SMT, we choose to translate SMT problems to SAT problems so that we can leverage our representation of SAT problems to also encode SMT problems.
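As a sketch of this translation with Z3's Python API (our illustration, not the paper's code; the tactic names and the Goal.dimacs() helper assume a recent z3 release):

from z3 import Goal, Then, parse_smt2_file

goal = Goal()
goal.add(parse_smt2_file("instance.smt2"))
# Bit-blast and Tseitin-encode the SMT instance into a propositional CNF.
subgoal = Then("simplify", "bit-blast", "tseitin-cnf")(goal)[0]
with open("instance.cnf", "w") as f:
    f.write(subgoal.dimacs())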
3.2 NETWORK ARCHITECTURE
Figure 2 depicts the architecture of our proposed neural network, which uses three convolution layers. The first layer aims at reducing the scale of the input matrix, because the matrix can still be too large to fit in memory even in the compact representation. The remaining layers form an ordinary convolutional network.
For convolution layers whose stride is one, the output of a layer is slightly smaller than its input, with the output size depending on the kernel size. Therefore, the scalability of this model would be poor if the input were large. In order to tackle this challenge, we design the first layer as follows.
[Figure 2: Network Architecture. The pipeline is: CNF matrix → Conv layer with N×M kernel + ReLU → 2×2 max pooling → Conv layer with 5×5 kernel + ReLU → 2×2 max pooling → Conv layer with 3×3 kernel + ReLU → 2×2 max pooling → FC → result, where N and M are determined by the input matrix.]
Algorithm 1: Solving_CNF
Input: φ, N
Output: Result
 1  res := predictCNF(φ);
 2  if res = UNSAT then
 3      return UNSAT;
 4  assignment := [];
 5  predTimes := 0;
 6  index := 0;
 7  predLists := new map();
 8  while index < NumberOfVar(φ) do
 9      assign := random([true, false]);
10      newCNF := assignVar(φ, assign, index);
11      // res is a structure with 〈label, probability〉
12      res := predictCNF(newCNF);
13      predLists.insert(newCNF, res); index := index + 1;
14  newCNF := chooseTopNProb(φ, predLists, N);
15  assignment := solver.solve(newCNF);
16  if assignment = SAT then
17      return constructAssign(assignment, predLists, N);
18  return UNKNOWN;
The goal of this first layer is to shrink each input matrix into a fixed-size matrix by choosing a specific stride and kernel size. At a high level, we first split an input matrix into a fixed number of sub-matrices (e.g., 100×100); N and M are determined by the input matrix. Then, we extract the features of each sub-matrix and use them to form a new matrix. In this way, we are able to process matrices of any size; the only requirement is that the input matrix pass through the first layer.
After the first convolution layer, the size of the matrix is fixed (e.g., 100 × 100). We then build three pooling layers and two other convolution layers. The last layer is a fully-connected layer that computes the scores.
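A PyTorch sketch of this architecture follows. The paper does not specify channel widths or the input encoding depth, so the two-channel input (the positive/negative counts of the compact representation) and the channel sizes below are our assumptions; the adaptive pooling stands in for the input-dependent N×M first layer:

import torch
import torch.nn as nn

class CNNSATNet(nn.Module):
    """Sketch of the Figure 2 architecture with illustrative channel widths."""
    def __init__(self, num_classes=2):
        super().__init__()
        # First layer: shrink any input to a fixed 100 x 100 feature map.
        self.shrink = nn.Sequential(nn.Conv2d(2, 16, kernel_size=3, padding=1),
                                    nn.ReLU(), nn.AdaptiveMaxPool2d((100, 100)))
        self.features = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 10 * 10, num_classes)

    def forward(self, x):  # x: (batch, 2, m', n') compact CNF matrix
        h = self.features(self.shrink(x))
        return self.fc(h.flatten(1))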
3.3 SAT SOLVING
In order to solve a CNF formula instead of only predicting whether it is SAT or UNSAT, we simplify the CNF formula by guessing a satisfying assignment. We predict an assignment as follows. First, we construct new CNF formulas by assigning random values (i.e., true or false) to variables, and thus construct new matrices. We then feed these new matrices to the trained model and analyze the prediction results. We choose a specific number of assignments based on prediction probabilities (i.e., confidence). Next, we use an off-the-shelf solver to find assignments for the rest of the variables. Finally, we combine the two types of assignments to construct a final assignment.
Algorithm 1 shows the steps we use for solving CNF formulas. The input is a formula instead of a compressed matrix, which limits the scalability of satisfiability solving. First, we do not solve formulas that our CNN model predicts to be UNSAT (Lines 1-3). We assign random values (true or false) to the variables and use our model to predict the resulting formulas (Lines 10-12). Note that we assign variables one by one based on the order of the variables (Line 8). We then store the result and the new CNF (Line 13). After obtaining prediction results for all variables, we select a specific number (i.e., N) of predicted assignments ranked by probability (Line 14). Reducing the original CNF formula with these partial assignments yields a new, simplified CNF formula, which is fed to an existing solver (Line 15). At the end, we merge the predicted partial assignment with the solver result to construct a full assignment if the solver finds a satisfying assignment (Lines 16-17). Otherwise, we regard the formula as UNKNOWN (Line 18).
Consider, for example, an input CNF formula (x1∨x2) ∧ (¬x1∨x2) ∧ (x1∨¬x2). First, assume that we assign false to x1, which leads to the new, simplified CNF formula (x2) ∧ (¬x2). We feed this formula to our model, and let us assume that it predicts the formula to be SAT with 80% probability. Next, we try x2 as true; the CNF formula simplifies to (x1). If the prediction is SAT with 90% probability and N is 1, then we assign true to x2 and use a solver to resolve (x1). The solver returns the satisfying assignment x1 = true. With these two pieces of variable assignment information, we derive the satisfying assignment {x1 = true, x2 = true} for the original CNF formula. Note that if N were chosen to be 2, the combined variable assignment would not be a satisfying assignment. We choose to determine N dynamically based on the dataset.
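For concreteness, the following is our Python sketch of Algorithm 1 on top of the pycosat bindings for PicoSAT. The `predict` argument is an assumed wrapper around the trained CNN returning a (label, probability) pair, and clauses use the DIMACS integer-literal convention:

import random
import pycosat  # Python bindings for PicoSAT, used as the back-end solver

def assign_var(clauses, lit):
    """Simplify under literal `lit`: drop satisfied clauses, strip the negation."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def solve_cnf(clauses, num_vars, predict, n):
    label, _ = predict(clauses)
    if label == 'UNSAT':                        # Lines 1-3
        return 'UNSAT'
    guesses = []                                # (probability, guessed literal)
    for var in range(1, num_vars + 1):          # variables in order (Line 8)
        lit = var if random.random() < 0.5 else -var
        label, prob = predict(assign_var(clauses, lit))
        guesses.append((prob if label == 'SAT' else 0.0, lit))
    top = [lit for _, lit in sorted(guesses, reverse=True)[:n]]   # Line 14
    reduced = clauses
    for lit in top:
        reduced = assign_var(reduced, lit)
    result = pycosat.solve(reduced)             # Line 15
    if isinstance(result, list):                # Lines 16-17
        assigned = {abs(l) for l in top}
        return sorted([l for l in result if abs(l) not in assigned] + top, key=abs)
    return 'UNKNOWN'                            # Line 18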
4 DATASETS
We use CNFgen (Lauria et al., 2017) to generate CNF formulas in the DIMACS format. It generates combinatorial problems that are challenging for SAT solvers, and it can produce several different classes of problems. For this work, we restrict CNFgen to generating random 3-SAT instances whose numbers of variables and clauses are configurable.
We generate two kinds of datasets, Long Range and Separated. The number of variables for Long Range ranges from 10 to 410, and the clause-variable ratio ranges from 4 to 8. Solvers take a long time to solve CNFs with more than 400 variables and a clause-variable ratio of 8. We generate 16,000 random CNFs; an illustrative generation script follows below.
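A hypothetical generation loop (assuming the `cnfgen` command-line tool from Lauria et al. (2017) is installed; the exact invocation used in the paper is not given):

import random
import subprocess

for i in range(16000):
    v = random.randint(10, 410)                 # number of variables
    m = int(v * random.uniform(4, 8))           # clause-variable ratio in [4, 8]
    cnf = subprocess.run(["cnfgen", "randkcnf", "3", str(v), str(m)],
                         capture_output=True, text=True, check=True).stdout
    with open(f"long_range_{i}.cnf", "w") as fh:
        fh.write(cnf)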
The second dataset, Separated, is used to test CNNSAT's ability to predict CNFs drawn from three disjoint sub-datasets: (1) a small dataset whose number of variables ranges from 12 to 30, (2) a medium dataset whose number of variables ranges from 130 to 160, and (3) a large dataset whose number of variables is between 300 and 330. The clause-variable ratio still ranges from 4 to 8. There are 95,000 CNF formulas in this dataset.
We use 75% of the whole dataset for training and the rest for testing. A dataset should contain a relatively balanced distribution of satisfiable and unsatisfiable instances, and cannot be made from instances that are all in the same class. The ratio of SAT to UNSAT is 9637:6357 in Long Range, and the ratio of SAT to UNSAT is 5604:3896 in Separated.
Figure 3a depicts the number of clauses in the different datasets. Figure 3b shows the distribution of the number of variables in the different datasets. Long Range is a dataset that is unbiased w.r.t. the number of variables, but Separated is not. The goal of the Separated dataset is to compare the behavior of networks with balanced and unbalanced datasets.
Figures 4a and 4b show the distribution of SAT and UNSAT instances in the different datasets. The number of SAT and UNSAT instances in these datasets is nearly evenly distributed across different ranges of variables. Note that the number of variables is not evenly distributed (Figure 4b) because we would also like to evaluate the performance of CNNSAT when the dataset is not evenly distributed by the number of variables.
Finally, we construct our SMT dataset from the SMT benchmarks provided by SMT (2018). We choose two theories: QF_BV and QF_IDL. As for predicting satisfiability for SMT problems, we use Z3 (De Moura & Bjørner, 2008) to convert them to SAT problems and use our model to predict satisfiability for these SAT problems.
5 EVALUATION
All our experiments run on a PC with the following hardware configuration: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, 16GB memory and the GPU is GeForce 730 with 2GB memory. We have implemented CNNSAT based on TensorFlow with GPU support.
As discussed earlier, we use CNFgen (Lauria et al., 2017) to generate random 3-SAT problem instances in the DIMACS format. We use Z3 (De Moura & Bjørner, 2008) to convert SMT problems to SAT problems. PicoSAT (Biere, 2007) is used to help predict assignments for CNF formulas. We discard all SAT problems that cannot be solved by PicoSAT within a 10-minute budget. For each dataset, 75% of the data is used for training and the rest for testing.
5.1 PREDICTION RESULTS ON RANDOM 3-SAT PROBLEMS
Table 1 shows the summary results of our neural network on different datasets. We evaluated CNNSAT’s accuracy over the datasets with the 25% holdout setting, i.e., we trained our models on 75% of the data and tested on the remaining 25% data. We performed all experiments three times and computed the average performance over these three runs.
Table 1 shows CNNSAT’s accuracy on two datasets. The overall accuracy on the Long Range 3-SAT instances is 98.1%. The accuracy for SAT on SAT instances is 99.0%, and the accuracy for UNSAT on UNSAT instances is 97.0%. The accuracy for predicting satisfying assignments is 92.6%. The
overall accuracy for the Separated 3-SAT instances is 96.4%. The accuracy for SAT on the SAT instances is 97.7%, while the accuracy for UNSAT on the UNSAT instances is 94.3%. CNNSAT’s accuracy for predicting satisfying assignments is 91.4%.
As for the scalability of CNNSAT, we evaluate it from three aspects. First, we measure the time spent on predicting the satisfiability of CNF formulas. We use Z3, PicoSAT, MiniSAT (Sorensson & Een, 2005), Glucose (Audemard & Simon, 2009), Dimetheus (Gableske, 2013) and CaDiCaL (Biere, 2017) for comparison to evaluate CNNSAT’s efficiency. Due to space constraints, we only show the results for the two best-performing solvers, MiniSAT and PicoSAT. “Pred” denotes the time spent making predictions on the test data; note that 1/4 of the CNF formulas were used for testing. “MiniSAT” and “PicoSAT” show the time that MiniSAT and PicoSAT spent on solving all the CNF formulas, respectively. The results show that CNNSAT clearly outperforms MiniSAT and PicoSAT by 1-2 orders of magnitude, making it practical for real-world use. “% of Imp on assign” denotes the percentage improvement of our SAT solving algorithm compared to directly solving the CNFs predicted as satisfiable using PicoSAT. We observe that solving speed on Long Range improves when using our method. However, performance on the Separated dataset decreases. The reason is that Separated contains less complicated CNFs, so there is little to gain when CNNSAT pre-assigns part of the variables, while predicting candidate assignments introduces additional overhead.
5.2 EQUIVALENCE RESULTS
In this experiment, we evaluate two kinds of semantics-preserving operations, permutation invariance and negation invariance. For negation invariance, we generate datasets by negating half of the variables. For permutation invariance, we randomly choose two variables and swap them; for each CNF instance, we perform ⌊N/2⌋ swaps, where N is the number of variables. For each of the two operations, we run the evaluation three times and average the results.
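In matrix form, these two transformations amount to column operations; a sketch (ours, for illustration):

import numpy as np

def negation_transform(mat, rng):
    """Negate half of the variables by flipping the sign of their columns."""
    out = mat.copy()
    n = out.shape[1]
    out[:, rng.choice(n, size=n // 2, replace=False)] *= -1
    return out

def permutation_transform(mat, rng):
    """Perform floor(n/2) random pairwise column swaps."""
    out = mat.copy()
    n = out.shape[1]
    for _ in range(n // 2):
        i, j = rng.choice(n, size=2, replace=False)
        out[:, [i, j]] = out[:, [j, i]]
    return out

rng = np.random.default_rng(0)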
Table 2 shows the results. We can see that CNNSAT predicts SAT/UNSAT with high accuracy; the accuracy is close to that on the original datasets in Table 1. “% of difference” shows the percentage of differences in individual predictions. The evaluation results show that CNNSAT is able to capture the semantics of SAT problems.
5.3 ACCURACY ON SMT BENCHMARKS
Table 3 shows the accuracy of CNNSAT on SMT benchmarks. The timeout for each phase is also 10 minutes. “CNV time” stands for the time it takes to convert SMT problems to SAT problems. In our experiments, Z3 may convert an SMT instance to an empty SAT instance whose number of variables is zero or one; we ignore these trivial SAT instances.
We can see from the table that CNNSAT is able to predict satisfiability with more than 73% accuracy. In addition, CNNSAT is 1-2 orders of magnitude faster than Z3.
5.4 DISCUSSIONS
Sparse Convolutional Neural Network. We use a traditional CNN for CNNSAT and construct a matrix based on the CNF. However, it is clear that the matrix is sparse. In fact, for 3-SAT problems, the matrices are very sparse and most elements are zero. However, we have not found a sparse CNN that fits our scenario well. Graham & van der Maaten (2017) present Submanifold Sparse Convolutional Networks, but since the matrices in our setting are not submanifold, they do not fit our representation.
Guiding SAT solvers. Most state-of-the-art SAT solvers implement Conflict-Driven Clause Learning (CDCL) (Silva & Sakallah, 1996; Bayardo Jr & Schrag, 1997). CDCL repeatedly selects a variable, assigns it true or false, and searches for conflicts until all variables are assigned. CNNSAT may improve a solver's performance by steering each assignment toward the value predicted to lead the formula to SAT. Although this does not help when a formula is UNSAT, it may improve performance when a formula is SAT. Performance could also be improved by learning a strategy that guides the selection toward conflicting assignments early.
6 RELATED WORK
Bello et al. (2017) present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. They also apply it to other NP-hard problems such as traveling salesman problem and KnapSack. It shows performance improvement compared to standard algorithmic methods.
Fei & Rompf (2018) propose another avenue for SAT. They cast symbolic reasoning problems directly as gameplay to leverage the full decision-making power of neural networks through deep reinforcement learning. Most SAT solvers are based on the Conflict Driven Clause Learning (CDCL) algorithm, which is a typical symbolic reasoning process that can be cast as a game of controlling the branching decisions. The results show that this method can obtain better performance.
Xu et al. (2012) show that 70% classification accuracy can be obtained based on phase transition features on uniform-random 3-SAT formulas. CNNSAT’s prediction accuracy is significantly higher under a similar experimental setup. In addition, phase transition features vary on different kinds of formulas, and thus a significant performance drop is expected on SAT instances converted from SMT formulas.
NeuroSAT (Selsam et al., 2018) uses an undirected graph to represent CNFs and builds a model from two vectors, three multilayer perceptrons, and two layer-norm LSTMs. However, it needs to generate a particular type of pairs to model SAT: in each pair, one element is satisfiable, the other is unsatisfiable, and the two differ by negating only a single literal occurrence in a single clause. The training data is therefore constrained by this requirement, which means that for some data, like uniform 3-SAT, it takes a significant amount of time to generate the training data. In contrast, for CNNSAT, any training data is useful. NeuroSAT is also unable to precisely predict satisfiability when the number of variables is large. Bünz & Lamm (2017) propose a method based on Graph Neural Networks that classifies SAT instances with around 60% validation error. Their representation is similar to NeuroSAT’s, using graphs to represent CNFs.
Feature-based machine learning methods (Devlin & O’Sullivan, 2008; Grozea & Popescu, 2014) can also classify SAT instances. Grozea & Popescu (2014) aim to empirically test the ability of machine learning models to act as decision oracles for NP problems. They only evaluated the idea on formulas with up to 100 variables; the approach does not scale to formulas with more variables, such as the large formulas considered in this paper. Devlin & O’Sullivan (2008) view the satisfiability problem as a classification task. Based on easy-to-compute structural features of instances of large satisfiability problems, they use a variety of standard classifier learners to classify previously unseen instances as either SAT or UNSAT. Their classification accuracy is more than 90%. In comparison, CNNSAT can predict variable assignments and handle much larger formulas.
7 CONCLUSION
In this paper, we have introduced a new, fast, and accurate approach for deciding SAT problems via convolutional neural networks. We have described how we represent SAT instances, how we design the proposed neural network, how we optimize our technique for scalability, and an extensive evaluation showing CNNSAT’s high accuracy and scalability on large SAT and SMT problem instances. Because of CNNSAT’s effectiveness, it may find interesting applications in domains that require fast SAT and SMT solving, such as software analysis and verification, symbolic execution, planning and scheduling, and combinatorial design.
2. Do you have any concerns regarding the comparison between PicoSAT and Z3?
3. How does the reviewer assess the paper's presentation and claims regarding SAT solvers and their performance?
4. What are some missing details or experiments that could improve the paper's content related to the exchangeability property and CNN usage?
5. How does the reviewer evaluate the paper's practical usefulness and competitiveness in SAT competition instances? | Review | Review
The authors present CNNSAT, a CNN-based approach to predict the satisfiability of SAT instances.
The problem is very relevant, and the approach is interesting, but unfortunately, the presentation is very misleading (see details below). In terms of methods, the main innovation appears to be the use of CNNs for predicting the solubility of SAT instances, but because of the exchangeability I don't actually see the intuition for this (see details below). Overall, because of these issues, I do not think this paper is ready for publication.
Misleading parts:
=================
1. Already in the abstract the authors make an utterly wrong statement about SAT solvers:
"State-of-the-art solvers exist and have found a wide range of applications. However, they still do not scale well to formulas with hundreds of variables."
The same sentiment is repeated in the introduction; I'm puzzled why the authors would believe this. SAT solvers nowadays are routinely used on instances with hundreds of thousands of variables and millions of clauses (just see any of the recent SAT competitions (https://satcompetition.org/) for examples).
2. Z3 is *not* a good solver for random instances, far from state-of-the-art. It is not the right baseline.
3. The authors' approach implicitly makes use of PicoSAT, so their experiments are really implicitly comparing PicoSAT vs. Z3.
4. Comparing the prediction time of CNNSAT with the solving time of Z3 does not make much sense to me, since it does not solve every instance.
5. No comparison of CNNSAT (internally using PicoSAT) vs. PicoSAT is given.
6. The paper did *not* demonstrate that CNNSAT is competitive in practice. It is only faster than Z3 on random instances, which are not of practical interest. Demonstrating practical usefulness would require competitive performance on the SAT competition instances. By the way, CNNSAT would be disqualified in any SAT competition for falsely returning UNSAT for some satisfiable instances. Algorithm 1 should instead return UNKNOWN when it cannot find a solution.
7. Taken out of context, the predictive quality the authors achieve looks great: between 96% and 99% for random 3-SAT instances. However, this is misleading since the instances are not sampled at the phase transition and may thus be very easy to classify. Usually, for uniform random 3-SAT, the phase transition happens when the number of clauses for a number of variables v exceeds c = 4.258 · v + 58.26 · v^{-2/3} (see [1]), although I do not remember whether this is for clauses generated with or without replacement. (I looked into the documentation of CNFGEN, and for random k-cnf, it samples clauses without replacement.) It would be very useful to see the accuracy obtained by classifying every formula with at least c clauses as unsatisfiable and every formula with fewer than c clauses as satisfiable. Could the authors please report this number during the author response period?
The authors are also missing an additional related paper: [2] used simple models to obtain better-than-chance predictions at the phase transition.
Exchangeability and the use of CNNs:
====================================
Due to the exchangeability property, we do *not* care about spatial correlation in the adjacency matrix. I am really missing the details on how to achieve the fixed-size 100x100 matrices. This approach sounds like it would lose a lot of information!
I do not find the experiment studying exchangeability to be convincing. The experiment I would like to see is shuffling all variables, and/or negating half the variables, rather than swapping a single pair of variables. Even then, the experiment should optimally measure differences in individual predictions rather than differences in aggregate performance statistics.
One more question:
- How was N chosen? I only saw the statement "We choose to determine N dynamically based on the dataset."
[1] Crawford and Auton: Experimental results on the crossover point in random 3SAT. In Artificial Intelligence Journal, 1996.
[2] Xu, Hoos, and Leyton-Brown: Predicting Satisfiability at the Phase Transition. In AAAI 2012. |
ICLR | Title
CNNSAT: Fast, Accurate Boolean Satisfiability using Convolutional Neural Networks
Abstract
Boolean satisfiability (SAT) is one of the most well-known NP-complete problems and has been extensively studied. State-of-the-art solvers exist and have found a wide range of applications. However, they still do not scale well to formulas with hundreds of variables for uniform 3-SAT problems. To tackle this fundamental scalability challenge, we introduce CNNSAT, a fast and accurate statistical decision procedure for SAT based on convolutional neural networks. CNNSAT’s effectiveness is due to a precise and compact representation of Boolean formulas. On both real and synthetic formulas, CNNSAT is highly accurate and orders of magnitude faster than the state-of-the-art solver Z3. We also describe how to extend CNNSAT to predict satisfying assignments when it predicts a formula to be satisfiable.
N/A
Boolean satisfiability (SAT) is one of the most well-known NP-complete problems and has been extensively studied. State-of-the-art solvers exist and have found a wide range of applications. However, they still do not scale well to formulas with hundreds of variables for uniform 3-SAT problems. To tackle this fundamental scalability challenge, we introduce CNNSAT, a fast and accurate statistical decision procedure for SAT based on convolutional neural networks. CNNSAT’s effectiveness is due to a precise and compact representation of Boolean formulas. On both real and synthetic formulas, CNNSAT is highly accurate and orders of magnitude faster than the state-of-the-art solver Z3. We also describe how to extend CNNSAT to predict satisfying assignments when it predicts a formula to be satisfiable.
1 INTRODUCTION
The Boolean satisfiability problem, or SAT, is a classical decision problem. Given a propositional formula φ, SAT needs to decide whether φ has a satisfying assignment to its variables. If the answer is yes, we say that the formula φ is satisfiable, or SAT for short. Otherwise, it is unsatisfiable, or UNSAT for short. For example, the formula (x1∨x2)∧(¬x1∨x2)∧(x1∨¬x2) is satisfiable when both x1 and x2 are true (i.e., x1 = x2 = true). Conversely, (x1 ∨x2)∧ (¬x1 ∨x2)∧ (x1 ∨¬x2)∧ (¬x1 ∨¬x2) cannot be satisfied by any of the possible assignments.
In other words, SAT asks whether the variables of the given Boolean formula φ can be consistently assigned the values true or false such that the formula evaluates to true. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the formula is false for all possible variable assignments and is unsatisfiable.
SAT is a classical NP-complete problem, and in fact was the first problem proved NP-complete. Many hard problems naturally reduce to SAT, such as the traveling salesman problem (TSP) and clique detection. SAT has been extensively studied in the literature for decades because it is a foundational problem and has wide applications. The more general satisfiability modulo theories (SMT) can also be reduced to SAT solving.
SAT is one of the most investigated problems, and numerous heuristics exist to help speed up SAT solving. However, state-of-the-art solvers do not yet scale to large, difficult formulas, such as ones with hundreds of variables and thousands of clauses for uniform 3-SAT problems (Xu et al., 2012). This is because the search space for solutions increases exponentially w.r.t. the number of variables. Most search-based SAT solvers are based on the DPLL approach (Davis et al., 1962), but the search space, even reduced, is still intractable for very large formulas.
State-of-the-art methods for SAT adopt the Conflict-Driven Clause Learning (CDCL) (Silva & Sakallah, 1996; Bayardo Jr & Schrag, 1997) algorithm. This is a systematic search algorithm but employs various optimizations to improve efficiency. However, because the general problem is NP-complete, systematic search algorithms have exponential worst-case complexity, which limits the scalability of these methods.
There exists several pieces of previous work that try to use machine/deep learning methods to improve SAT solvers (Loreggia et al., 2016; Fei & Rompf, 2018), classify SAT/UNSAT (Bünz & Lamm, 2017; Grozea & Popescu, 2014; Devlin & O’Sullivan, 2008), or directly solve SAT instances (Selsam et al., 2018). However, all these approaches focus on SAT problems with a small number of variables.
In this paper, we introduce CNNSAT, a fast and accurate technique based on Convolutional Neural Networks (Krizhevsky et al., 2012) to predict both satisfiability and satisfying assignments. Evaluated on a data set containing 3-SAT problems with up to 410 variables, CNNSAT is able to predict SAT/UNSAT with more than 95% accuracy and orders of magnitude faster than state-of-the-art solvers. We also introduce optimizations to further improve CNNSAT’s scalability. As for the more general SMT problems, CNNSAT is able to predict their satisfiability with more than 73% accuracy.
2 PRELIMINARIES
Convolutional neural network (CNN). CNN is a class of deep, feed-forward artificial neural networks. Many successful applications of CNNs exist in image and sound processing. A CNN has an input and an output layer, as well as multiple hidden layers, which typically consist of convolutional layers, pooling layers, fully connected layers and normalization layers.
The SAT and 3-SAT problems. In Boolean logic, a formula is in conjunctive normal form (CNF) if it is a conjunction of one or more sub-formulas. For a CNF formula, each of its sub-formulas is called a clause, which is a disjunction of literals (i.e., variables or their negations). The clause-to-variable ratio of a CNF formula is defined as the ratio of the number of clauses over the number of variables.
Each SAT instance can be represented in CNF. A CNF formula has a satisfying assignment iff there exists at least one assignment for each variable in the formula such that the formula evaluates to true. The objective of SAT solving is to determine whether or not a given formula is satisfiable, and produce a satisfying assignment when the formula is satisfiable.
3-SAT is a special case of SAT where the number of literals in each clause is up to three. Generalizing 3-SAT, N -SAT requires all clauses having no more than N literals, and uniform N -SAT requires all clauses having exactly N literals. 3-SAT is also NP-complete, and, in general, N -SAT, for N > 2, can be reduced to 3-SAT.
The SMT problem Satisfiability Modulo Theories (SMT) refers to the problem of determining whether a first-order formula is satisfiable w.r.t. some logical theories. It is typically applied to the theory of real numbers, the theory of integers, and the theories of various data structures, such as lists, arrays, and bit vectors.
For brevity, hereafter if a SAT problem instance is in CNF, we refer to it as CNF. Otherwise, we still use SAT to refer to the more general case.
3 CNNSAT
This section presents the technical details behind our approach. In particular, it describes the representation that we introduce to encode CNF formulas, the architecture of our proposed neural network, and the method that we use to find satisfying assignments.
3.1 REPRESENTATION
A SAT problem has a simple syntactic structure and therefore can be encoded into a syntax-based representation such as an abstract syntax tree (AST). The semantics of propositional logic induces rich invariance that such syntactic representations would ignore, e.g., permutation and negation invariance (Selsam et al., 2018). Permutation invariance stipulates that the satisfiability of a SAT problem is not affected by swapping the variables (e.g., swapping all occurrences of x1 with those of x2 in the SAT instance). Negation invariance means that negating every literal corresponding to a given variable (e.g., replacing xi by ¬xi, and ¬xi by xi for any variable xi in the SAT instance). As noted by Selsam et al. (2018), syntax-based representations do not capture the semantics of SAT problems. In other words, they cannot identify even the simplest semantic equivalence among SAT problems, such as permutation and negation invariance discussed earlier. On the other hand, even though syntax-based representations may not accurately capture semantic equivalence, sufficient amount of training data may allow neural networks to learn and predict the semantics of SAT formulas. Our evaluation in Section 5 confirms this hypothesis. In addition, for certain applications, most CNFs do not share the same/similar semantics. Therefore, we adopt a syntax-based representation to balance accuracy and scalability.
SAT Representation. Although a SAT problem can be represented in different forms, we choose the most common CNF format. Each clause in a CNF formula φ is represented by a vector v, where v = 〈e1, e2, ..., en〉, and the dimension of v, n, corresponds to the number of variables in φ. For each element ei in the vector, we set it to 0 if the corresponding variable xi does not occur in the clause, -1 if the variable xi is negated, and 1 otherwise (i.e., when the literal xi appears in the clause). Collectively, the vectors for φ’s clauses form an m× n matrix, where m is the number of clauses and n the number of variables.
Figure 1 shows an example to illustrate this representation. The CNF formula is shown in the left sub-figure, while the representation is shown in the right sub-figure. The first line in the left sub-figure (p CNF 5 6) indicates that the CNF has 5 variables and 6 clauses. The other rows in the left sub-figure is in the format 〈vi1 vi2 vi3〉, where i is the i-th clause, vij is a literal (i.e., j-th variable or its negation) in the clause — a negative value indicates that the variable is negated. The actual CNF formula is
(x1∨x2∨x3)∧(x2∨¬x3∨x4)∧(x1∨¬x2∨¬x3)∧(x1∨x2∨x4)∧(x3∨¬x4∨¬x5)∧(¬x3∨x4∨x5)
From the table, we can see that the representation in the right sub-figure encodes all the values of the variables into corresponding values in the matrix.
This representation is straightforward and the conversion is efficient. Note that this is a sparse matrix because only a small number of elements in each row are nonzero. However, we observe that, in practice, a SAT problem may have as many as millions of variables and clauses. At such a large scale, these SAT problems cannot fit in memory. Therefore, we propose a compact representation to improve scalability.
Our core idea is to split a matrix into smaller sub-matrices and summarize information for each sub-matrix. First, we define a fixed size sliding window. Then, we split the original matrix into sub-matrices according to the size of the original matrix and the sliding window. For each submatrix, ri = 〈pi, ni〉 is a compact representation for the i-th sub-matrix, where pi is the number of positive values in the sub-matrix and ni the number of negative values. Therefore, each sub-matrix is converted to a list with 2 elements. It is worth noting that when the size of the sliding window is 1 × 1, it retains the exact information in the original matrix. In the next section, we introduce additional optimizations for the compact matrix for better performance. Our experimental evaluation shows that this representation can accurately capture semantic equivalence.
SMT Representation. There are several straightforward representations for SAT problems. In contrast, representing SMT problems is more challenging. Although we can design custom representations for SMT, we choose to translate SMT problems to SAT problems so that we can leverage our representation of SAT problems to also encode SMT problems.
3.2 NETWORK ARCHITECTURE
Figure 2 depicts the architecture of our proposed neural network which uses three convolution layers for CNN. The first layer of our network aims at reducing the scale of the input matrix because this matrix can still be too large to fit in memory even for the compact representation. The last two layers are used for building neural networks.
For convolution layers whose stride is one, the size of the output after one layer is sightly smaller than the size of the input. The output size depends on the kernel size. Therefore, the scalability of this model is poor if the size of the input is large. In order to tackle this challenge, we add the first
Figure 2: Network Architecture. [The figure shows the pipeline: the CNF matrix enters a convolution layer with an N×M kernel (ReLU, 2×2 max pooling), followed by a convolution layer with a 5×5 kernel (ReLU, 2×2 max pooling), a convolution layer with a 3×3 kernel (ReLU, 2×2 max pooling), and a fully-connected layer that produces the result; N and M depend on the input.]
Algorithm 1: Solving_CNF
Input: φ, N
Output: Result
 1  res := predictCNF(φ);
 2  if res = UNSAT then
 3      return UNSAT;
 4  assignment := [];
 5  predTimes := 0;
 6  index := 0;
 7  predLists := new map();
 8  while index < NumberOfVar(φ) do
 9      assign := random([true, false]);
10      newCNF := assignVar(φ, assign, index);
11      // res is a structure with 〈label, probability〉
12      res := predictCNF(newCNF);
13      predLists.insert(newCNF, res);
14  newCNF := chooseTopNProb(φ, predLists, N);
15  assignment := solver.solve(newCNF);
16  if assignment = SAT then
17      return constructAssign(assignment, predLists, N);
18  return UNKNOWN;
layer, whose goal is to shrink each input matrix into a fixed-size matrix by choosing a specific stride and kernel size. At a high level, we first split an input matrix into a fixed number of sub-matrices (e.g., 100 × 100); N and M are determined by the input matrix. Then, we extract the features of each sub-matrix and use them to form a new matrix. In this way, we are able to process matrices of any size; the only requirement is that the input matrix can be processed by this first layer.
After the first convolution layer, the size of the matrix is fixed (e.g., 100 × 100). We then build three pooling layers and two other convolution layers. The last layer is a fully-connected layer that computes the scores.
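The paper does not report channel widths or the size of the fully-connected layer, so the Keras sketch below fills those in with placeholder values; only the overall structure, an input-dependent first kernel followed by 5×5 and 3×3 convolutions with ReLU, 2×2 max pooling, and a fully-connected output, follows Figure 2.

```python
import tensorflow as tf

def build_cnnsat(input_h, input_w, target=100, channels=1):
    """Sketch of the Figure 2 pipeline. Channel widths (16/32/64) and the
    dense size are placeholders; the first layer's kernel and stride are
    chosen so any input shrinks to roughly target x target."""
    k = (max(input_h // target, 1), max(input_w // target, 1))
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, kernel_size=k, strides=k,
                               activation="relu",
                               input_shape=(input_h, input_w, channels)),
        tf.keras.layers.MaxPool2D(2),
        tf.keras.layers.Conv2D(32, 5, activation="relu"),
        tf.keras.layers.MaxPool2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPool2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2),  # scores for SAT / UNSAT
    ])
```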
3.3 SAT SOLVING
In order to solve a CNF formula instead of only predicting whether it is SAT or UNSAT, we simplify the CNF formula by guessing a satisfying assignment. We predict an assignment as follows. First, we construct new CNF formulas by assigning random values (i.e., true or false) to variables, and thus construct new matrices. We then feed these new matrices to the trained model and analyze the prediction results. We choose a specific number of assignments based on prediction probabilities (i.e., confidence). Next, we use an off-the-shelf solver to find assignments for the rest of the variables. Finally, we combine the two types of assignments to construct a final assignment.
Algorithm 1 shows the steps we use for solving CNF formulas. The input is a formula instead of a compressed matrix, which limits the scalability of satisfiability solving. First, we do not solve
formulas that our CNN model predicts to be UNSAT (Lines 1-3). We assign random values (true or false) to the variables and use our model to predict them (Lines 10-12). Note that we assign variables one by one based on the order of variables (Line 8). Then, we store the result and the new CNF (Line 13). After obtaining prediction results for all variables, we select a specific number (i.e., N) of predicted variables ranked by probability (Line 14). Reducing the original CNF formula with these partial assignments yields a new, simplified CNF formula, which is fed to an existing solver (Line 15). At the end, we merge the predicted partial assignment with the solver result to construct an assignment if the solver finds a satisfiable assignment (Lines 16-17). Otherwise we regard the formula as UNKNOWN (Line 18).
Consider, for example, an input CNF formula (x1∨x2)∧(¬x1∨x2)∧(x1∨¬x2). First, assume that we assign false to x1, which leads to the new, simplified CNF formula: (x2)∧(¬x2). We feed this formula to our model, and let us assume that it predicts the formula to be SAT with 80% probability. Next, starting again from the original formula, we try x2 = true, which simplifies the CNF formula to (x1). If the prediction is SAT with 90% probability and N is 1, then we assign true to x2 and use a solver to resolve (x1). The solver returns the satisfying assignment x1 = true. With these two pieces of variable assignment information, we derive the satisfying assignment {x1 = true, x2 = true} for the original CNF formula. Note that if N were chosen to be 2, the combined variable assignment would not be a satisfying assignment. We choose to determine N dynamically based on the dataset.
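The following self-contained sketch mirrors Algorithm 1 on the clause-list representation. The `predict` argument stands in for the trained CNN, and a brute-force search stands in for the off-the-shelf solver (PicoSAT in the paper); as in lines 8-13, each single-variable reduction starts from the original formula.

```python
import random
from itertools import product

def reduce_cnf(clauses, var, value):
    """Fix variable `var` (1-indexed) to `value` and simplify the clause list."""
    lit = var if value else -var
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def brute_force(clauses, variables):
    """Stand-in for the off-the-shelf solver (PicoSAT in the paper)."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return a
    return None

def solve_cnf(clauses, num_vars, n_keep, predict):
    """predict(clauses) must return a (label, probability) pair."""
    label, _ = predict(clauses)
    if label == "UNSAT":
        return "UNSAT"
    scored = []
    for var in range(1, num_vars + 1):             # lines 8-13: each
        value = random.choice([True, False])       # reduction starts
        reduced = reduce_cnf(clauses, var, value)  # from the original formula
        lab, prob = predict(reduced)
        if lab == "SAT":
            scored.append((prob, var, value))
    scored.sort(key=lambda t: t[0], reverse=True)  # line 14
    fixed = {var: val for _, var, val in scored[:n_keep]}
    rest = clauses
    for var, val in fixed.items():
        rest = reduce_cnf(rest, var, val)
    if [] in rest:                                 # a clause became empty
        return "UNKNOWN"
    free = sorted({abs(l) for c in rest for l in c})
    solved = brute_force(rest, free)               # line 15
    if solved is None:
        return "UNKNOWN"
    solved.update(fixed)                           # lines 16-17; variables
    return solved                                  # absent from `rest` are free
```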
4 DATASETS
We use CNFgen (Lauria et al., 2017) to generate CNF formulas in the DIMACS format. It generates combinatorial, challenging problems for SAT solvers. CNFgen is also able to generate different problems. For this work, we restricted CNFgen to generate random 3-SAT instances whose number of variables and number of clauses are configurable.
We generate two kinds of datasets, Long Range and Separated. The number of variables for Long Range ranges from 10 to 410 and the clause-variable ratio ranges from 4 to 8. Solvers take considerably longer to solve CNFs with more than 400 variables at a clause-variable ratio of 8. We generate 16,000 random CNFs.
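For illustration only (the actual data comes from CNFgen, not from this code), a random 3-SAT instance in this style can be drawn as follows:

```python
import random

def random_3sat(num_vars, ratio, rng=random):
    """Illustrative stand-in for CNFgen's random 3-SAT generator: draw
    round(ratio * num_vars) clauses over three distinct variables with
    random polarities."""
    clauses = []
    for _ in range(round(ratio * num_vars)):
        vs = rng.sample(range(1, num_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

# A Long Range-style instance: n in [10, 410], ratio in [4, 8].
phi = random_3sat(random.randint(10, 410), random.uniform(4, 8))
```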
The second dataset, Separated, is used to test CNNSAT's ability to predict CNFs using three smaller sub-datasets: (1) a small dataset whose number of variables ranges from 12 to 30, (2) a medium dataset whose number of variables ranges from 130 to 160, and (3) a large dataset whose number of variables is between 300 and 330. The clause-variable ratio still ranges from 4 to 8. There are 95,000 CNF formulas in this dataset.
We use 75% of the whole dataset for training and the rest of them for testing. A dataset should contain a relatively balanced distribution of satisfiable and unsatisfiable instances, and cannot be made from instances that are all in the same class. The ratio of SAT to UNSAT is 9637:6357 in Long Range and the ratio of SAT to UNSAT is 5604:3896 in Separated.
Figure 3a depicts the number of clauses in the different datasets. Figure 3b shows the distribution of the number of variables in the different datasets. Long Range is a dataset that is unbiased w.r.t. the number of variables, but Separated is not. The goal of the Separated dataset is to compare the behavior of networks with balanced and unbalanced datasets.
Figures 4a and 4b show the distribution of SAT and UNSAT instances in the different datasets. The number of SAT and UNSAT instances in these datasets is nearly evenly distributed across different ranges of variables. Note that the number of variables itself is not evenly distributed (Figure 3b) because we would also like to evaluate the performance of CNNSAT when the dataset is not evenly distributed by the number of variables.
Finally, we construct our SMT dataset from the SMT benchmarks provided by SMT (2018). We choose two theories: QF_BV and QF_IDL. As for predicting satisfiability for SMT problems, we use Z3 (De Moura & Bjørner, 2008) to convert them to SAT problems and use our model to predict satisfiability for these SAT problems.
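The paper does not say which Z3 tactics it applies, so the pipeline below, a common simplify / bit-blast / tseitin-cnf recipe for QF_BV formulas, is our assumption; QF_IDL instances would need a different reduction first.

```python
from z3 import Goal, Then, parse_smt2_file

def smt_to_cnf(path):
    """Convert an SMT-LIB file to propositional clauses via Z3 tactics.
    The tactic sequence here is a guess; the paper only states that Z3
    performs the SMT-to-SAT conversion."""
    goal = Goal()
    goal.add(*parse_smt2_file(path))
    to_cnf = Then('simplify', 'bit-blast', 'tseitin-cnf')
    return to_cnf(goal)[0]  # a Goal whose assertions are clauses
```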
5 EVALUATION
All our experiments run on a PC with the following hardware configuration: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, 16GB memory and the GPU is GeForce 730 with 2GB memory. We have implemented CNNSAT based on TensorFlow with GPU support.
As discussed earlier, we use CNFgen (Lauria et al., 2017) to generate random 3-SAT problem instances in the DIMACS format. We use Z3 (De Moura & Bjørner, 2008) to convert SMT problems to SAT problems. PicoSAT (Biere, 2007) is used to help predict assignments for CNF formulas. We discard all SAT problems that cannot be solved by PicoSAT within a 10-minute budget. For each dataset, 75% of the data is used for training and the rest for testing.
5.1 PREDICTION RESULTS ON RANDOM 3-SAT PROBLEMS
Table 1 shows the summary results of our neural network on different datasets. We evaluated CNNSAT’s accuracy over the datasets with the 25% holdout setting, i.e., we trained our models on 75% of the data and tested on the remaining 25% data. We performed all experiments three times and computed the average performance over these three runs.
Table 1 shows CNNSAT’s accuracy on two datasets. The overall accuracy on the Long Range 3-SAT instances is 98.1%. The accuracy for SAT on SAT instances is 99.0%, and the accuracy for UNSAT on UNSAT instances is 97.0%. The accuracy for predicting satisfying assignments is 92.6%. The
overall accuracy for the Separated 3-SAT instances is 96.4%. The accuracy for SAT on the SAT instances is 97.7%, while the accuracy for UNSAT on the UNSAT instances is 94.3%. CNNSAT’s accuracy for predicting satisfying assignments is 91.4%.
As for the scalability of CNNSAT, we evaluated it from three aspects. First, we measure the time spent on predicting the satisfiability of CNF formulas. We use Z3, PicoSAT, MiniSAT (Sorensson & Een, 2005), Glucose (Audemard & Simon, 2009), Dimetheus (Gableske, 2013) and CaDiCaL (Biere, 2017) for comparison to evaluate CNNSAT’s efficiency. Due to space constraints, we only show the results for the two best-performing solvers, MiniSAT and PicoSAT. “Pred” denotes the time used when making predictions on the test data; note that 1/4 of the CNF formulas were used for testing. “MiniSAT” and “PicoSAT” show the time that MiniSAT and PicoSAT spent on solving all the CNF formulas, respectively. The results show that CNNSAT clearly outperforms MiniSAT and PicoSAT by 1-2 orders of magnitude, making it practical for real-world use. “% of Imp on assign” denotes the percentage improvement of our SAT solving algorithm compared to directly solving CNFs predicted as satisfiable using PicoSAT. We observe that solving speed on Long Range improves when using our method. However, performance on the Separated dataset decreases. The reason is that Separated contains less complicated CNFs, so there is little to gain when CNNSAT predicts values for a subset of the variables; instead, CNNSAT introduces additional overhead by predicting potential assignments.
5.2 EQUIVALENCE RESULTS
In this experiment, we evaluate two kinds of semantically equivalent operations: permutation invariance and negation invariance. For negation invariance, we generate datasets by negating half of the variables. For permutation invariance, we randomly choose two variables and swap them; for each CNF instance, we swap variables ⌊N/2⌋ times, where N is the number of variables. For both operations, we evaluate three times and average the results.
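On the matrix representation both operations reduce to simple column manipulations; the sketch below (our illustration) shows how such transformed datasets can be generated.

```python
import numpy as np

def negate_vars(mat, cols):
    """Negation invariance: flipping a column's signs negates every
    occurrence of that variable."""
    out = mat.copy()
    out[:, cols] *= -1
    return out

def permute_vars(mat, rng=np.random.default_rng()):
    """Permutation invariance: swap two randomly chosen variables
    (columns), repeated floor(N/2) times as in the experiment."""
    out = mat.copy()
    n = out.shape[1]
    for _ in range(n // 2):
        i, j = rng.choice(n, size=2, replace=False)
        out[:, [i, j]] = out[:, [j, i]]
    return out
```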
Table 2 shows the results. We can see that CNNSAT predicts SAT/UNSAT with high accuracy; the accuracy is close to that on the original dataset in Table 1. The “% of difference” column shows the percentage of individual predictions that change under the transformation. The evaluation results show that CNNSAT is able to capture the semantics of SAT problems.
5.3 ACCURACY ON SMT BENCHMARKS
Table 3 shows the accuracy of CNNSAT on SMT benchmarks. The timeout for each phase is also 10 minutes. “CNV time” stands for how much time it takes to convert SMT problems to SAT problems. In our experiments, Z3 may convert an SMT instance to an empty SAT instance whose number of variables is zero or one. We ignore these trivial SAT instances.
We can see from the table that CNNSAT is able to predict these benchmarks with more than 73% accuracy. In addition, CNNSAT is 1-2 orders of magnitude faster than Z3.
5.4 DISCUSSIONS
Sparse Convolutional Neural Network. We use a traditional CNN for CNNSAT and construct a matrix based on the CNF. However, this matrix is clearly sparse. In fact, for 3-SAT problems, the matrices are very sparse and most of their elements are zero. However, we have not found sparse CNNs that fit our scenario. Graham & van der Maaten (2017) present Submanifold Sparse Convolutional Networks, but since the matrices in our setting are not submanifold, their approach does not fit our representation.
Guiding SAT solvers. Most state-of-the-art SAT solvers implement Conflict-Driven Clause Learning (CDCL) (Silva & Sakallah, 1996; Bayardo Jr & Schrag, 1997). In CDCL, the solver repeatedly selects a variable, assigns it true or false, and searches for conflicts until all variables are assigned. CNNSAT may improve such solvers by suggesting, for each variable, the value more likely to lead the formula to SAT. Although this does not help when a formula is UNSAT, it may improve performance when a formula is SAT. Performance could also be improved by learning a strategy that guides the selection toward conflicting assignments.
6 RELATED WORK
Bello et al. (2017) present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. They also apply it to other NP-hard problems such as the traveling salesman problem and knapsack. Their framework shows performance improvements compared to standard algorithmic methods.
Fei & Rompf (2018) propose another avenue for SAT. They cast symbolic reasoning problems directly as gameplay to leverage the full decision-making power of neural networks through deep reinforcement learning. Most SAT solvers are based on the Conflict Driven Clause Learning (CDCL) algorithm, which is a typical symbolic reasoning process that can be cast as a game of controlling the branching decisions. The results show that this method can obtain better performance.
Xu et al. (2012) show that 70% classification accuracy can be obtained based on phase transition features on uniform-random 3-SAT formulas. CNNSAT’s prediction accuracy is significantly higher under a similar experimental setup. In addition, phase transition features vary on different kinds of formulas, and thus a significant performance drop is expected on SAT instances converted from SMT formulas.
NeuroSAT (Selsam et al., 2018) uses an undirected graph to represent CNFs and builds a model from two vectors, three multilayer perceptrons and two layer-norm LSTMs. However, it needs to generate a certain type of pairs to model SAT: in each pair, one element is satisfiable, the other is unsatisfiable, and the two differ by negating only a single literal occurrence in a single clause. The training data is therefore constrained by this requirement, which means that for some data, like uniform 3-SAT, it takes a significant amount of time to generate the training data. In contrast, for CNNSAT, any training data is useful. NeuroSAT is also unable to precisely predict satisfiability when the number of variables is large. Bünz & Lamm (2017) propose a method based on Graph Neural Networks that is able to classify SAT instances with around 60% validation error. Their representation is similar to NeuroSAT’s, which uses graphs to represent CNFs.
Feature-based machine learning methods (Devlin & O’Sullivan, 2008; Grozea & Popescu, 2014) can also classify SAT instances. Grozea & Popescu (2014) aim to empirically test the ability of machine learning models to act as decision oracles for NP problems. They only evaluated the idea on formulas with up to 100 variables; the approach does not scale to formulas with more variables, such as the large formulas considered in this paper. Devlin & O’Sullivan (2008) view the satisfiability problem as a classification task. Based on easy-to-compute structural features of instances of large satisfiability
problems, they use a variety of standard classifier learners to classify previously unseen instances of the satisfiability problem as either SAT or UNSAT. The accuracy for classification is more than 90%. In comparison, CNNSAT can predict variable assignments and handle much larger formulas.
7 CONCLUSION
In this paper, we have introduced a new fast and accurate approach for solving SAT problems via Convolutional Neural Networks. We have described how we represent SAT instances, how we design our proposed neural network, how we optimize our technique for scalability, and our extensive evaluation showing CNNSAT’s high accuracy and scalability on large SAT and SMT problem instances. Because of CNNSAT’s effectiveness, it may find interesting applications in domains that require fast SAT and SMT solving, such as software analysis and verification, symbolic execution, planning and scheduling, and combinatorial design. | 1. What is the novelty and significance of the proposed method in solving large boolean formulas?
2. How effective is the proposed approach compared to existing state-of-the-art solvers?
3. Are there any concerns regarding the experimental evaluation and claims made by the current submission?
4. How does the reviewer assess the quality and reproducibility of the results?
5. Is there any suggestion for future research to improve the accuracy and practical applications of the proposed method? | Review | Review
[Second Update] I still find the method proposed in this paper appealing, and think that it may have practical applications in addition to providing significant research contributions. A key question that was raised by the other two reviewers was whether the proposed approach was fairly evaluated against existing state-of-the-art solvers. The authors have responded to these concerns by adding clarifications and new baselines to their paper. However, based on the discussions to date, I feel that I am not sufficiently familiar with related work on SAT solvers to say whether the other reviewers' concerns have been fully addressed. If they have been, I'd strongly lean towards accepting the paper. As for the concerns from my original review: the transferability experiments reported in the author comments below are quite informative, and I'd encourage the authors to incorporate them into the paper (or an appendix if space is an issue). I'd also encourage the author to incorporate the full comparisons against Z3, PicoSAT, MiniSAT, Glucose, Dimetheus, and CaDiCaL from Section 5.1. (I've updated my rating for the paper from 5 to 6, and my confidence score from 3 to 2.)
[First Update] Based on the feedback of the other two reviewers, I believe that I was missing some important context about SAT solvers when I wrote my initial review. Reviewer 1 and Reviewer 2 both raised serious concerns about the types of SAT instances that were used to evaluate the experimental setup, as well as about the use of Z3 as a baseline for solving random SAT instances. (No author response was provided.) Given this additional information, I've lowered my score for the paper from an 8 to a 5. I do think that the approach is interesting, but have reservations about the experimental evaluation and the claims made by the current submission. (Note: As the paper authors point out in the comment below, this update was mistakenly submitted a few days before the end of the rebuttal period.)
[Original Title] Interesting idea, impressive results for a first paper
[Summary] The authors propose a method of using convolutional neural networks to determine whether large boolean formulas (containing hundreds of variables and thousands of clauses) are satisfiable. The resulting models are accurate (correctly distinguishing between satisfiable and unsatisfiable formulas between 90% and 99% of the time, depending on the dataset) while taking 10x - 100x less time than an off-the-shelf solver (Z3), offering slightly better quality on some problems and slightly worse quality on others. In addition to determining whether formulas are satisfiable, the authors propose and evaluate a method for finding satisfying assignments. They also evaluate their system on SMT benchmarks, where it also shows 10x-100x speed-ups, albeit with somewhat lower accuracy (e.g., 73% - 92% accuracy; I couldn't find baselines for these experiments).
[Key Comments] Unless I'm missing something major, I'd prefer to accept this paper, since the problem appears novel and the experimental results seem very promising for a first paper on a new problem.
[Pro 1] The paper seems polished and well-written. I generally found it well-motivated and easy to follow.
[Pro 2] To the best of my knowledge, the problem domain (machine learning for satisfiability problems that are so large that they are difficult to solve using conventional methods) is both novel and well-motivated.
[Pro 3] Algorithms seem conceptually straightforward (but might be a bit challenging to implement in practice due to the large input size), and yield excellent results. The magnitude of speed-ups reported in the paper (10x - 100x) is large enough to be exciting from a research perspective, and also seems like it should be large enough to have significant practical applications.
[Pro 4] Results are evaluated on a variety of different boolean satisfiability and SMT problems.
[Con 1] To improve reproducibility, it would be helpful if the authors could provide more details about their model training setup. Figure 2 is a good start, but adding details about the layer sizes, types of pooling layers used, and the model training setup would help clarify the experiments.
[Con 2] It seems like a significant number of labeled training examples (i.e., examples that are already known to be satisfiable or unsatisfiable) are needed in order to train a neural network. This seems like it could present a bootstrapping problem for certain domains: it may be computationally expensive to generate ground-truth labels for training examples, but a significant number of labels are needed to train a good prediction model. I'd be very interested to see a study of how well trained models transfer across domains: how well do models trained on one domain (e.g., a synthetic problem where labeled training data is cheap to generate) transfer to a different domain (e.g., a real-world problem where training labels are expensive to compute)? However, this is a minor point for a first paper on a new problem, and I think the paper is interesting enough to merit acceptance without such an analysis. |
ICLR | Title
CNNSAT: Fast, Accurate Boolean Satisfiability using Convolutional Neural Networks
Abstract
Boolean satisfiability (SAT) is one of the most well-known NP-complete problems and has been extensively studied. State-of-the-art solvers exist and have found a wide range of applications. However, they still do not scale well to formulas with hundreds of variables for uniform 3-SAT problems. To tackle this fundamental scalability challenge, we introduce CNNSAT, a fast and accurate statistical decision procedure for SAT based on convolutional neural networks. CNNSAT’s effectiveness is due to a precise and compact representation of Boolean formulas. On both real and synthetic formulas, CNNSAT is highly accurate and orders of magnitude faster than the state-of-the-art solver Z3. We also describe how to extend CNNSAT to predict satisfying assignments when it predicts a formula to be satisfiable.
1 INTRODUCTION
The Boolean satisfiability problem, or SAT, is a classical decision problem. Given a propositional formula φ, SAT needs to decide whether φ has a satisfying assignment to its variables. If the answer is yes, we say that the formula φ is satisfiable, or SAT for short. Otherwise, it is unsatisfiable, or UNSAT for short. For example, the formula (x1∨x2)∧(¬x1∨x2)∧(x1∨¬x2) is satisfiable when both x1 and x2 are true (i.e., x1 = x2 = true). Conversely, (x1∨x2)∧(¬x1∨x2)∧(x1∨¬x2)∧(¬x1∨¬x2) cannot be satisfied by any of the possible assignments.
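Both claims are easy to verify exhaustively; the short check below (ours, purely for illustration) encodes literals as signed 1-indexed variable ids.

```python
from itertools import product

def is_sat(clauses, num_vars):
    """Exhaustively test all 2^num_vars assignments."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

print(is_sat([[1, 2], [-1, 2], [1, -2]], 2))            # True: x1 = x2 = true
print(is_sat([[1, 2], [-1, 2], [1, -2], [-1, -2]], 2))  # False
```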
In other words, SAT asks whether the variables of the given Boolean formula φ can be consistently assigned the values true or false such that the formula evaluates to true. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the formula is false for all possible variable assignments and is unsatisfiable.
SAT is a classical NP-complete problem, and in fact was the first problem proved NP-complete. Many hard problems naturally reduce to SAT, such as the traveling salesman problem (TSP) and clique detection. SAT has been extensively studied in the literature for decades because it is a foundational problem and has wide applications. The more general satisfiability modulo theories (SMT) can also be reduced to SAT solving.
SAT is one of the most investigated problems, and numerous heuristics exist to help speed up SAT solving. However, state-of-the-art solvers do not yet scale to large, difficult formulas, such as ones with hundreds of variables and thousands of clauses for uniform 3-SAT problems (Xu et al., 2012). This is because the search space for solutions increases exponentially w.r.t. the number of variables. Most search-based SAT solvers are based on the DPLL approach (Davis et al., 1962), but the search space, even reduced, is still intractable for very large formulas.
State-of-the-art methods for SAT adopt the Conflict-Driven Clause Learning (CDCL) (Silva & Sakallah, 1996; Bayardo Jr & Schrag, 1997) algorithm. This is a systematic search algorithm but employs various optimizations to improve efficiency. However, because the general problem is NP-complete, systematic search algorithms have exponential worst-case complexity, which limits the scalability of these methods.
There exist several pieces of previous work that try to use machine/deep learning methods to improve SAT solvers (Loreggia et al., 2016; Fei & Rompf, 2018), classify SAT/UNSAT (Bünz & Lamm, 2017; Grozea & Popescu, 2014; Devlin & O’Sullivan, 2008), or directly solve SAT instances (Selsam et al., 2018). However, all these approaches focus on SAT problems with a small number of variables.
In this paper, we introduce CNNSAT, a fast and accurate technique based on Convolutional Neural Networks (Krizhevsky et al., 2012) to predict both satisfiability and satisfying assignments. Evaluated on a data set containing 3-SAT problems with up to 410 variables, CNNSAT is able to predict SAT/UNSAT with more than 95% accuracy and orders of magnitude faster than state-of-the-art solvers. We also introduce optimizations to further improve CNNSAT’s scalability. As for the more general SMT problems, CNNSAT is able to predict their satisfiability with more than 73% accuracy.
2 PRELIMINARIES
Convolutional neural network (CNN). CNN is a class of deep, feed-forward artificial neural networks. Many successful applications of CNNs exist in image and sound processing. A CNN has an input and an output layer, as well as multiple hidden layers, which typically consist of convolutional layers, pooling layers, fully connected layers and normalization layers.
The SAT and 3-SAT problems. In Boolean logic, a formula is in conjunctive normal form (CNF) if it is a conjunction of one or more sub-formulas. For a CNF formula, each of its sub-formulas is called a clause, which is a disjunction of literals (i.e., variables or their negations). The clause-to-variable ratio of a CNF formula is defined as the ratio of the number of clauses over the number of variables.
Each SAT instance can be represented in CNF. A CNF formula has a satisfying assignment iff there exists at least one assignment for each variable in the formula such that the formula evaluates to true. The objective of SAT solving is to determine whether or not a given formula is satisfiable, and produce a satisfying assignment when the formula is satisfiable.
3-SAT is a special case of SAT where the number of literals in each clause is up to three. Generalizing 3-SAT, N -SAT requires all clauses having no more than N literals, and uniform N -SAT requires all clauses having exactly N literals. 3-SAT is also NP-complete, and, in general, N -SAT, for N > 2, can be reduced to 3-SAT.
The SMT problem Satisfiability Modulo Theories (SMT) refers to the problem of determining whether a first-order formula is satisfiable w.r.t. some logical theories. It is typically applied to the theory of real numbers, the theory of integers, and the theories of various data structures, such as lists, arrays, and bit vectors.
For brevity, hereafter if a SAT problem instance is in CNF, we refer to it as CNF. Otherwise, we still use SAT to refer to the more general case.
3 CNNSAT
This section presents the technical details behind our approach. In particular, it describes the representation that we introduce to encode CNF formulas, the architecture of our proposed neural network, and the method that we use to find satisfying assignments.
3.1 REPRESENTATION
A SAT problem has a simple syntactic structure and therefore can be encoded into a syntax-based representation such as an abstract syntax tree (AST). The semantics of propositional logic induces rich invariance that such syntactic representations would ignore, e.g., permutation and negation invariance (Selsam et al., 2018). Permutation invariance stipulates that the satisfiability of a SAT problem is not affected by swapping the variables (e.g., swapping all occurrences of x1 with those of x2 in the SAT instance). Negation invariance means that negating every literal corresponding to a given variable (e.g., replacing xi by ¬xi, and ¬xi by xi, for any variable xi in the SAT instance) does not affect satisfiability either. As noted by Selsam et al. (2018), syntax-based representations do not capture the semantics of SAT problems. In other words, they cannot identify even the simplest semantic equivalence among SAT problems, such as the permutation and negation invariance discussed earlier. On the other hand, even though syntax-based representations may not accurately capture semantic equivalence, a sufficient amount of training data may allow neural networks to learn and predict the semantics of SAT formulas. Our evaluation in Section 5 confirms this hypothesis. In addition, for certain applications, most CNFs do not share the same/similar semantics. Therefore, we adopt a syntax-based representation to balance accuracy and scalability.
SAT Representation. Although a SAT problem can be represented in different forms, we choose the most common CNF format. Each clause in a CNF formula φ is represented by a vector v, where v = 〈e1, e2, ..., en〉, and the dimension of v, n, corresponds to the number of variables in φ. For each element ei in the vector, we set it to 0 if the corresponding variable xi does not occur in the clause, -1 if the variable xi is negated, and 1 otherwise (i.e., when the literal xi appears in the clause). Collectively, the vectors for φ’s clauses form an m × n matrix, where m is the number of clauses and n the number of variables.
Figure 1 shows an example to illustrate this representation. The CNF formula is shown in the left sub-figure, while the representation is shown in the right sub-figure. The first line in the left sub-figure (p CNF 5 6) indicates that the CNF has 5 variables and 6 clauses. The other rows in the left sub-figure are in the format 〈vi1 vi2 vi3〉, where i indexes the clauses and vij is the j-th literal (i.e., a variable or its negation) in clause i; a negative value indicates that the variable is negated. The actual CNF formula is
(x1∨x2∨x3)∧(x2∨¬x3∨x4)∧(x1∨¬x2∨¬x3)∧(x1∨x2∨x4)∧(x3∨¬x4∨¬x5)∧(¬x3∨x4∨x5)
From the figure, we can see that the representation in the right sub-figure encodes the literals of each clause into the corresponding ±1/0 entries of the matrix.
This representation is straightforward and the conversion is efficient. Note that this is a sparse matrix because only a small number of elements in each row are nonzero. However, we observe that, in practice, a SAT problem may have as many as millions of variables and clauses. At such a large scale, these SAT problems cannot fit in memory. Therefore, we propose a compact representation to improve scalability.
Our core idea is to split a matrix into smaller sub-matrices and summarize information for each sub-matrix. First, we define a fixed-size sliding window. Then, we split the original matrix into sub-matrices according to the size of the original matrix and the sliding window. For each sub-matrix, ri = 〈pi, ni〉 is its compact representation, where pi is the number of positive values in the i-th sub-matrix and ni the number of negative values. Therefore, each sub-matrix is converted to a list with 2 elements. It is worth noting that a sliding window of size 1 × 1 retains the exact information in the original matrix. In the next section, we introduce additional optimizations for the compact matrix for better performance. Our experimental evaluation shows that this representation can accurately capture semantic equivalence.
SMT Representation. There are several straightforward representations for SAT problems. In contrast, representing SMT problems is more challenging. Although we can design custom representations for SMT, we choose to translate SMT problems to SAT problems so that we can leverage our representation of SAT problems to also encode SMT problems.
3.2 NETWORK ARCHITECTURE
Figure 2 depicts the architecture of our proposed neural network, which uses three convolution layers. The first layer of our network aims at reducing the scale of the input matrix, because this matrix can still be too large to fit in memory even with the compact representation. The remaining layers form a standard convolutional classification network.
For convolution layers whose stride is one, the size of the output after one layer is slightly smaller than the size of the input. The output size depends on the kernel size. Therefore, the scalability of this model is poor if the size of the input is large. In order to tackle this challenge, we add the first
Figure 2: Network Architecture. [The figure shows the pipeline: the CNF matrix enters a convolution layer with an N×M kernel (ReLU, 2×2 max pooling), followed by a convolution layer with a 5×5 kernel (ReLU, 2×2 max pooling), a convolution layer with a 3×3 kernel (ReLU, 2×2 max pooling), and a fully-connected layer that produces the result; N and M depend on the input.]
Algorithm 1: Solving_CNF
Input: φ, N
Output: Result
 1  res := predictCNF(φ);
 2  if res = UNSAT then
 3      return UNSAT;
 4  assignment := [];
 5  predTimes := 0;
 6  index := 0;
 7  predLists := new map();
 8  while index < NumberOfVar(φ) do
 9      assign := random([true, false]);
10      newCNF := assignVar(φ, assign, index);
11      // res is a structure with 〈label, probability〉
12      res := predictCNF(newCNF);
13      predLists.insert(newCNF, res);
14  newCNF := chooseTopNProb(φ, predLists, N);
15  assignment := solver.solve(newCNF);
16  if assignment = SAT then
17      return constructAssign(assignment, predLists, N);
18  return UNKNOWN;
layer, whose goal is to shrink each input matrix into a fixed-size matrix by choosing a specific stride and kernel size. At a high level, we first split an input matrix into a fixed number of sub-matrices (e.g., 100 × 100); N and M are determined by the input matrix. Then, we extract the features of each sub-matrix and use them to form a new matrix. In this way, we are able to process matrices of any size; the only requirement is that the input matrix can be processed by this first layer.
After the first convolution layer, the size of the matrix is fixed (e.g., 100 × 100). We then build three pooling layers and two other convolution layers. The last layer is a fully-connected layer that computes the scores.
3.3 SAT SOLVING
In order to solve a CNF formula instead of only predicting whether it is SAT or UNSAT, we simplify the CNF formula by guessing a satisfying assignment. We predict an assignment as follows. First, we construct new CNF formulas by assigning random values (i.e., true or false) to variables, and thus construct new matrices. We then feed these new matrices to the trained model and analyze the prediction results. We choose a specific number of assignments based on prediction probabilities (i.e., confidence). Next, we use an off-the-shelf solver to find assignments for the rest of the variables. Finally, we combine the two types of assignments to construct a final assignment.
Algorithm 1 shows the steps we use for solving CNF formulas. The input is a formula instead of a compressed matrix, which limits the scalability of satisfiability solving. First, we do not solve
formulas that our CNN model predicts to be UNSAT (Lines 1-3). We assign random values (true or false) to the variables and use our model to predict them (Lines 10-12). Note that we assign variables one by one based on the order of variables (Line 8). Then, we store the result and the new CNF (Line 13). After obtaining prediction results for all variables, we select a specific number (i.e., N) of predicted variables ranked by probability (Line 14). Reducing the original CNF formula with these partial assignments yields a new, simplified CNF formula, which is fed to an existing solver (Line 15). At the end, we merge the predicted partial assignment with the solver result to construct an assignment if the solver finds a satisfiable assignment (Lines 16-17). Otherwise we regard the formula as UNKNOWN (Line 18).
Consider, for example, an input CNF formula (x1∨x2)∧(¬x1∨x2)∧(x1∨¬x2). First, assume that we assign false to x1, which leads to the new, simplified CNF formula: (x2)∧(¬x2). We feed this formula to our model, and let us assume that it predicts the formula to be SAT with 80% probability. Next, starting again from the original formula, we try x2 = true, which simplifies the CNF formula to (x1). If the prediction is SAT with 90% probability and N is 1, then we assign true to x2 and use a solver to resolve (x1). The solver returns the satisfying assignment x1 = true. With these two pieces of variable assignment information, we derive the satisfying assignment {x1 = true, x2 = true} for the original CNF formula. Note that if N were chosen to be 2, the combined variable assignment would not be a satisfying assignment. We choose to determine N dynamically based on the dataset.
4 DATASETS
We use CNFgen (Lauria et al., 2017) to generate CNF formulas in the DIMACS format. It generates combinatorial, challenging problems for SAT solvers. CNFgen is also able to generate different problems. For this work, we restricted CNFgen to generate random 3-SAT instances whose number of variables and number of clauses are configurable.
We generate two kinds of datasets, Long Range and Separated. The number of variables for Long Range ranges from 10 to 410 and the clause-variable ratio ranges from 4 to 8. Solvers take considerably longer to solve CNFs with more than 400 variables at a clause-variable ratio of 8. We generate 16,000 random CNFs.
The second dataset, Separated, is used to test CNNSAT's ability to predict CNFs using three smaller sub-datasets: (1) a small dataset whose number of variables ranges from 12 to 30, (2) a medium dataset whose number of variables ranges from 130 to 160, and (3) a large dataset whose number of variables is between 300 and 330. The clause-variable ratio still ranges from 4 to 8. There are 95,000 CNF formulas in this dataset.
We use 75% of the whole dataset for training and the rest of them for testing. A dataset should contain a relatively balanced distribution of satisfiable and unsatisfiable instances, and cannot be made from instances that are all in the same class. The ratio of SAT to UNSAT is 9637:6357 in Long Range and the ratio of SAT to UNSAT is 5604:3896 in Separated.
Figure 3a depicts the number of clauses in the different datasets. Figure 3b shows the distribution of the number of variables in the different datasets. Long Range is a dataset that is unbiased w.r.t. the number of variables, but Separated is not. The goal of the Separated dataset is to compare the behavior of networks with balanced and unbalanced datasets.
Figures 4a and 4b show the distribution of SAT and UNSAT instances in the different datasets. The number of SAT and UNSAT instances in these datasets is nearly evenly distributed across different ranges of variables. Note that the number of variables itself is not evenly distributed (Figure 3b) because we would also like to evaluate the performance of CNNSAT when the dataset is not evenly distributed by the number of variables.
Finally, we construct our SMT dataset from the SMT benchmarks provided by SMT (2018). We choose two theories: QF_BV and QF_IDL. As for predicting satisfiability for SMT problems, we use Z3 (De Moura & Bjørner, 2008) to convert them to SAT problems and use our model to predict satisfiability for these SAT problems.
5 EVALUATION
All our experiments run on a PC with the following hardware configuration: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, 16GB memory and the GPU is GeForce 730 with 2GB memory. We have implemented CNNSAT based on TensorFlow with GPU support.
As discussed earlier, we use CNFgen (Lauria et al., 2017) to generate random 3-SAT problem instances in the DIMACS format. We use Z3 (De Moura & Bjørner, 2008) to convert SMT problems to SAT problems. PicoSAT (Biere, 2007) is used to help predict assignments for CNF formulas. We discard all SAT problems that cannot be solved by PicoSAT within a 10-minute budget. For each dataset, 75% of the data is used for training and the rest for testing.
5.1 PREDICTION RESULTS ON RANDOM 3-SAT PROBLEMS
Table 1 shows the summary results of our neural network on different datasets. We evaluated CNNSAT’s accuracy over the datasets with the 25% holdout setting, i.e., we trained our models on 75% of the data and tested on the remaining 25% data. We performed all experiments three times and computed the average performance over these three runs.
Table 1 shows CNNSAT’s accuracy on two datasets. The overall accuracy on the Long Range 3-SAT instances is 98.1%. The accuracy for SAT on SAT instances is 99.0%, and the accuracy for UNSAT on UNSAT instances is 97.0%. The accuracy for predicting satisfying assignments is 92.6%. The
overall accuracy for the Separated 3-SAT instances is 96.4%. The accuracy for SAT on the SAT instances is 97.7%, while the accuracy for UNSAT on the UNSAT instances is 94.3%. CNNSAT’s accuracy for predicting satisfying assignments is 91.4%.
As for the scalability of CNNSAT, we evaluated it from three aspects. First, we measure the time spent on predicting the satisfiability of CNF formulas. We use Z3, PicoSAT, MiniSAT (Sorensson & Een, 2005), Glucose (Audemard & Simon, 2009), Dimetheus (Gableske, 2013) and CaDiCaL (Biere, 2017) for comparison to evaluate CNNSAT’s efficiency. Due to space constraints, we only show the results for the two best-performing solvers, MiniSAT and PicoSAT. “Pred” denotes the time used when making predictions on the test data; note that 1/4 of the CNF formulas were used for testing. “MiniSAT” and “PicoSAT” show the time that MiniSAT and PicoSAT spent on solving all the CNF formulas, respectively. The results show that CNNSAT clearly outperforms MiniSAT and PicoSAT by 1-2 orders of magnitude, making it practical for real-world use. “% of Imp on assign” denotes the percentage improvement of our SAT solving algorithm compared to directly solving CNFs predicted as satisfiable using PicoSAT. We observe that solving speed on Long Range improves when using our method. However, performance on the Separated dataset decreases. The reason is that Separated contains less complicated CNFs, so there is little to gain when CNNSAT predicts values for a subset of the variables; instead, CNNSAT introduces additional overhead by predicting potential assignments.
5.2 EQUIVALENCE RESULTS
In this experiment, we evaluate two kinds of semantically equivalent operations: permutation invariance and negation invariance. For negation invariance, we generate datasets by negating half of the variables. For permutation invariance, we randomly choose two variables and swap them; for each CNF instance, we swap variables ⌊N/2⌋ times, where N is the number of variables. For both operations, we evaluate three times and average the results.
Table 2 shows the results. We can see that CNNSAT predicts SAT/UNSAT with high accuracy; the accuracy is close to that on the original dataset in Table 1. The “% of difference” column shows the percentage of individual predictions that change under the transformation. The evaluation results show that CNNSAT is able to capture the semantics of SAT problems.
5.3 ACCURACY ON SMT BENCHMARKS
Table 3 shows the accuracy of CNNSAT on SMT benchmarks. The timeout for each phase is also 10 minutes. “CNV time” stands for how much time it takes to convert SMT problems to SAT problems. In our experiments, Z3 may convert an SMT instance to an empty SAT instance whose number of variables is zero or one. We ignore these trivial SAT instances.
We can see from the table that CNNSAT is able to predict these benchmarks with more than 73% accuracy. In addition, CNNSAT is 1-2 orders of magnitude faster than Z3.
5.4 DISCUSSIONS
Sparse Convolutional Neural Network. We use a traditional CNN for CNNSAT and construct a matrix based on the CNF. However, this matrix is clearly sparse. In fact, for 3-SAT problems, the matrices are very sparse and most of their elements are zero. However, we have not found sparse CNNs that fit our scenario. Graham & van der Maaten (2017) present Submanifold Sparse Convolutional Networks, but since the matrices in our setting are not submanifold, their approach does not fit our representation.
Guiding SAT solvers. Most state-of-the-art SAT solvers implement Conflict-Driven Clause Learning (CDCL) (Silva & Sakallah, 1996; Bayardo Jr & Schrag, 1997). In CDCL, the solver repeatedly selects a variable, assigns it true or false, and searches for conflicts until all variables are assigned. CNNSAT may improve such solvers by suggesting, for each variable, the value more likely to lead the formula to SAT. Although this does not help when a formula is UNSAT, it may improve performance when a formula is SAT. Performance could also be improved by learning a strategy that guides the selection toward conflicting assignments.
6 RELATED WORK
Bello et al. (2017) present a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. They also apply it to other NP-hard problems such as the traveling salesman problem and knapsack. Their framework shows performance improvements compared to standard algorithmic methods.
Fei & Rompf (2018) propose another avenue for SAT. They cast symbolic reasoning problems directly as gameplay to leverage the full decision-making power of neural networks through deep reinforcement learning. Most SAT solvers are based on the Conflict Driven Clause Learning (CDCL) algorithm, which is a typical symbolic reasoning process that can be cast as a game of controlling the branching decisions. The results show that this method can obtain better performance.
Xu et al. (2012) show that 70% classification accuracy can be obtained based on phase transition features on uniform-random 3-SAT formulas. CNNSAT’s prediction accuracy is significantly higher under a similar experimental setup. In addition, phase transition features vary on different kinds of formulas, and thus a significant performance drop is expected on SAT instances converted from SMT formulas.
NeuroSAT (Selsam et al., 2018) uses an undirected graph to represent CNFs and builds a model from two vectors, three multilayer perceptrons and two layer-norm LSTMs. However, it needs to generate a certain type of pairs to model SAT: in each pair, one element is satisfiable, the other is unsatisfiable, and the two differ by negating only a single literal occurrence in a single clause. The training data is therefore constrained by this requirement, which means that for some data, like uniform 3-SAT, it takes a significant amount of time to generate the training data. In contrast, for CNNSAT, any training data is useful. NeuroSAT is also unable to precisely predict satisfiability when the number of variables is large. Bünz & Lamm (2017) propose a method based on Graph Neural Networks that is able to classify SAT instances with around 60% validation error. Their representation is similar to NeuroSAT’s, which uses graphs to represent CNFs.
Feature-based machine learning methods (Devlin & O’Sullivan, 2008; Grozea & Popescu, 2014) can also classify SAT instances. Grozea & Popescu (2014) aim to empirically test the ability of machine learning models to act as decision oracles for NP problems. They only evaluated the idea on formulas with up to 100 variables; the approach does not scale to formulas with more variables, such as the large formulas considered in this paper. Devlin & O’Sullivan (2008) view the satisfiability problem as a classification task. Based on easy-to-compute structural features of instances of large satisfiability
problems, they use a variety of standard classifier learners to classify previously unseen instances of the satisfiability problem as either SAT or UNSAT. The accuracy for classification is more than 90%. In comparison, CNNSAT can predict variable assignments and handle much larger formulas.
7 CONCLUSION
In this paper, we have introduced a new fast and accurate approach for solving SAT problems via Convolutional Neural Networks. We have described how we represent SAT instances, how we design our proposed neural network, how we optimize our technique for scalability, and our extensive evaluation showing CNNSAT’s high accuracy and scalability on large SAT and SMT problem instances. Because of CNNSAT’s effectiveness, it may find interesting applications in domains that require fast SAT and SMT solving, such as software analysis and verification, symbolic execution, planning and scheduling, and combinatorial design. | 1. What is the main contribution of the paper regarding solving SAT instances using a CNN architecture?
2. How does the proposed approach compare with modern SAT solvers, particularly in terms of scalability and performance on random k-SAT instances?
3. How can the output of the CNN architecture be related to prediction probabilities of assignments, as mentioned in Algorithm 1?
4. Can you provide more details or examples to explain how the model predicts a partial assignment that is fed to an existing solver for deriving the satisfiability result?
5. How do modern SAT solvers such as MiniSAT, Glucose, March, or Dimetheus handle pseudo-industrial problems, and how does the CNNSAT architecture compare with these solvers in this regard?
6. How could the authors incorporate structures (such as community attachments or popularity-similarities) in SAT instances to estimate whether CNNSAT could handle pseudo-industrial problems? | Review | Review
The aim of this paper is to solve SAT instances using a CNN architecture. SAT instances are represented using an efficient encoding of boolean matrices. The overall idea is to decompose an input SAT instance into simpler ones, and to train the neural model on simpler instances using an existing solver for labeling these instances. Based on satisfaction probabilities induced from simpler formulas, the architecture predicts a partial assignment which is fed to the existing solver for deriving the satisfiability result.
Arguably, the topic of “learning to solve SAT instances” is very interesting, as it couples results from neural networks and SAT solvers. This work is inspired by the landmark paper on NeuroSAT, and the experimental results look promising.
However, since the framework is focused on solving random SAT problems (especially random 3-SAT instances), the paper is missing a detailed description of this active research topic in AI and the SAT community (see e.g. [1,2]). Notably, the problem of generating realistic random k-SAT instances has long been considered as one of the most important challenges in SAT research [3]. Importantly, modern random k-SAT instances are not only characterized by their number of variables, and their ratio #clauses / #variables, but with an additional “structure” which mimics real-world, industrial instances (see e.g. [4]).
Furthermore, I had some trouble understanding how a SAT instance is solved using algorithm 1. Specifically the text in Section 3.3 that explains Algorithm 1 is a bit confusing. How do “we choose a specific number of assignments based on prediction probabilities”? Unless I missed something, the output of the CNN architecture is a probability value that the input formula is SAT, so I don’t really see how this can be related to prediction probabilities of assignments. This should be explained in detail since Line 15 is the main output of the algorithm, which is fed (Line 16) to an existing solver for completing the assignment. The example at the end of section 3.3 is not very helpful: namely, the CNF formula $(x_2) \land (\neg x_2)$ is clearly unsatisfiable, so how can the model predict that it is satisfiable with 80% probability? And, if we try here $x_2 = 1$, we immediately get $\bot$ (the unsat CNF), but not $x_1$ (which was already assigned to $0$).
Finally, the CNN architecture should be compared with modern SAT solvers which have been participating in SAT competitions. The Z3 solver is mainly focused on solving SMT instances [5], not random k-SAT instances which, by the way, is a common track in annual SAT competitions (see e.g. [6]). To this point, generic SAT solvers such as MiniSAT [7] and Glucose [8] are able to solve in few seconds some random 3-SAT instances with thousands of variables and tens of thousands of clauses (see e.g. [4]). So, the motivating assertion “[...] state-of-the-art solvers do not yet scale to large, difficult formulas, such as ones with hundreds of variables and thousands of clauses” in the introduction of the paper, is not totally correct. To sum up, I would recommend comparing the CNNSAT architecture with well-known SAT solvers such as MiniSAT, Glucose, March, or Dimetheus [9] which has been one of the strongest solvers in recent years for tackling random instances. Also, as mentioned above, it would be interesting to incorporate some structures (such as, for example, community attachments or popularity-similarities) in SAT instances, in order to estimate whether CNNSAT could handle pseudo-industrial problems.
[1] D. Mitchell, B. Selman, H. Levesque, Hard and easy distributions of SAT problems, in: Proceedings of the 10th National Conference on Artificial Intelligence, AAAI’92, 1992, pp. 459–465.
[2] Nudelman, E., Leyton-Brown, K., Hoos, H. H., Devkar, A., & Shoham, Y. Understanding random SAT: Beyond the clauses-to-variables ratio. In 10th International Conference on Principles and Practice of Constraint Programming (CP’04), pp. 438–452.
[3] B. Selman, H.A. Kautz, D.A. McAllester, Ten challenges in propositional reasoning and search, in: Proceedings of the 15th International Joint Conference on Artificial Intelligence, IJCAI’97, 1997, pp. 50–54.
[4] J. Giráldez-Cru and J. Levy. Generating sat instances with community structure. Artificial Intelligence, 238:119 – 134, 2016.
[5] The 2014 SMT Competition https://satassociation.org/jsat/index.php/jsat/article/download/122/114
[6] The 2018 SAT Competition
http://sat2018.forsyte.tuwien.ac.at/index.php?cat=results
[7] N. Eén, N. Sörensson, An extensible SAT-solver, in: Proceedings of the 6th International Conference on Theory and Applications of Satisfiability Testing, SAT’03, 2003, pp. 502–518.
[8] G. Audemard, L. Simon, Predicting learnt clauses quality in modern SAT solvers, in: Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI’09, 2009, pp. 399–404
[9] Dimetheus
https://www.gableske.net/dimetheus |
ICLR | Title
Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients
Abstract
Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference. However, current methods are insufficient to enable this optimization and lead to a large degradation in model performance. In this paper, we identify a fundamental limitation in the formulation of current methods, namely that their saliency criteria look at a single step at the start of training without taking into account the trainability of the network. While pruning iteratively and gradually has been shown to improve pruning performance, explicit consideration of the training stage that will immediately follow pruning has so far been absent from the computation of the saliency criterion. To overcome the short-sightedness of existing methods, we propose Prospect Pruning (ProsPr), which uses meta-gradients through the first few steps of optimization to determine which weights to prune. ProsPr combines an estimate of the higher-order effects of pruning on the loss and the optimization trajectory to identify the trainable sub-network. Our method achieves state-of-the-art pruning performance on a variety of vision classification tasks, with less data and in a single shot compared to existing pruning-at-initialization methods. Our code is available online at https://github.com/mil-ad/prospr.
1 INTRODUCTION
Pruning at initialization—where we remove weights from a model before training begins—is a recent and promising area of research that enables us to enjoy the benefits of pruning at training time, and which may aid our understanding of training deep neural networks.
Frankle & Carbin (2019) provide empirical evidence for the existence of sparse sub-networks that can be trained from initialization and achieve accuracies comparable to the original network. These “winning tickets” were originally found in an iterative process where, in each iteration, the network is trained to full convergence followed by pruning a subset of the weights by magnitude. The values of the remaining weights are then rewound to their value at initialization, and the process is repeated iteratively until the desired sparsity level is achieved.
This process, known as Lottery Ticket Rewinding (LTR), is very compute-intensive and is prone to failures. For instance, Frankle et al. (2020) show better results by rewinding weights not all the way back to initialization, but to early stages of training instead. LTR is especially prone to failure for more difficult problems (e.g., training on ImageNet), where we must rewind weights to their state several epochs into training.
A recent line of work proposes alternative practical solutions to identify these sub-networks before training begins, without the cost of retraining the network iteratively (Lee et al., 2018; Wang et al., 2020; de Jorge et al., 2021; Tanaka et al., 2020). This class of methods uses gradients to assess
the importance of neural network weights. These gradients are often known as Synaptic Saliencies and are used to estimate the effect of pruning a single parameter in isolation on various objectives, typically the loss function. This objective is not so different from classical pruning-at-convergence methods, but the gradients for a well-trained model are small; therefore these methods must inspect higher-order metrics such as the Hessian to estimate the pruning effect (LeCun et al., 1990; Hassibi & Stork, 1993). Pruning at initialization is desirable because the benefits of pruning (in terms of memory and speed) can be reaped during training, rather than only at inference/deployment time.
However, the performance of prune-at-init methods remains poor: the degradation in accuracy is still significant compared to training the full model and LTR, making these methods impractical for many real-world problems (Frankle et al., 2021). In this paper, we identify a fundamental limitation in the objective formulation of current methods, namely that saliency criteria do not take into account the fact that the model is going to be trained after the pruning step. If our aim was to simply prune a subset of weights without affecting the loss, then these saliency criteria are estimating the correct objective. However, this estimate does not take into account that we are going to train the weights after we prune them. We need a metric that captures the trainability of the weights during the optimization steps, rather than a single myopic estimate.
Many methods attempt to overcome this by pruning gradually and/or adding training steps between iterative pruning steps (Zhu & Gupta, 2018; You et al., 2020; de Jorge et al., 2021). Although this approach has been shown to be effective, it is expensive and cumbersome in practice and ultimately is an indirect approximation to the trainability criteria we are looking to incorporate into our objective.
In this paper, we propose Prospect Pruning (ProsPr), a new pruning-at-init method that learns from the first few steps of optimization which parameters to prune. We explicitly formulate our saliency criteria to account for the fact that the network will be trained after pruning. More precisely, ProsPr uses meta-gradients by backpropagating through the first few model updates in order to estimate the effect the initial pruning parameters have on the loss after a few gradient descent steps. Effectively this enables us to account for both higher-order effects of pruning weights on the loss, as well as the trainability of individual weights. Similar to other methods we apply pruning to initialization values of weights and train our models from scratch. In summary, our contributions are:
• We identify a key limitation in prior saliency criteria for pruning neural networks—namely that they do not explicitly incorporate trainability-after-pruning into their criteria.
• We propose a new pruning-at-init method, ProsPr, that uses meta-gradients over the first few training steps to bridge the gap between pruning and training.
• We show empirically that ProsPr achieves higher accuracy compared to existing pruningat-init methods. Unlike other methods, our approach is single shot in the sense that the pruning is applied to the network initial weights in a single step.
2 BACKGROUND
In this section we review the key concepts that our method builds upon. We delay comparisons to other pruning techniques in the literature to Section 5.
Classic post-training pruning methods aim to identify and remove network weights with the least impact on the loss (LeCun et al., 1990; Hassibi & Stork, 1993). They typically use the Taylor expansion of the loss with respect to parameters to define a saliency score for each parameter:

$\delta L \approx \nabla_\theta L^\top \delta\theta + \frac{1}{2}\,\delta\theta^\top H \,\delta\theta$,

where $H = \nabla^2_\theta L$ is the Hessian matrix. When the network has converged, the first-order term in the expansion is negligible, and hence these methods resort to using $H$.
Lee et al. (2018) introduce SNIP, and show that the same objective of minimizing the change in loss can be used at initialization to obtain a trainable pruned network. At initialization, the first-order gradients ∇θ in the local quadratic approximation are still significant, so higher-order terms can be ignored. Hence the computation of the parameter saliencies can be done using backpropagation.
The Taylor expansion approximates the effect of small additive perturbations to the loss. To better approximate the effect of removing a weight, Lee et al. (2018) attach a multiplicative all-one mask to the computation graph of each weight. This does not change the forward-pass of the network, but it enables us to form the Taylor expansion around the mask values, rather than the weights, to estimate the effect of changing the mask values from 1 to 0. More specifically, SNIP computes the
saliency scores according to:
$s_j = \frac{|g_j(w, \mathcal{D})|}{\sum_{k=1}^{m} |g_k(w, \mathcal{D})|}$, (1)

with

$g_j(w, \mathcal{D}) = \frac{\partial L(c \odot w, \mathcal{D})}{\partial c_j}$, (2)
where $m$ is the number of weights in the network, $c \in \{0, 1\}^m$ is the pruning mask (initialised to 1 above), $\mathcal{D}$ is the training dataset, $w$ are the neural network weights, $L$ is the loss function, and $\odot$ is the Hadamard product. These saliency scores are computed before training the network, using one (or more) mini-batches from the training set. The global Top-K weights with the highest saliency scores are retained ($c_j = 1$), and all other weights are pruned ($c_j = 0$), before the network is trained.
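To make the SNIP procedure concrete, the following is a minimal, self-contained PyTorch sketch on a toy two-layer MLP; the tensor shapes, the synthetic data, and the 10% density are illustrative assumptions of ours, not values from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy two-layer MLP with explicit weight tensors, so the masked
# forward pass can be written by hand.
w1 = torch.randn(784, 100) * 0.05
w2 = torch.randn(100, 10) * 0.05

# Multiplicative all-one masks c attached to each weight (Eq. 2).
masks = [torch.ones_like(w, requires_grad=True) for w in (w1, w2)]

x = torch.randn(128, 784)              # one mini-batch of synthetic data
y = torch.randint(0, 10, (128,))

h = F.relu(x @ (masks[0] * w1))        # forward pass with masked weights
logits = h @ (masks[1] * w2)
loss = F.cross_entropy(logits, y)
loss.backward()                        # g_j = dL/dc_j lands in masks[j].grad

# Saliency scores (Eq. 1): normalised absolute mask gradients.
g = torch.cat([c.grad.abs().flatten() for c in masks])
scores = g / g.sum()

# Retain the global top-k weights, prune the rest (10% density is an
# illustrative choice).
k = int(0.10 * scores.numel())
threshold = torch.topk(scores, k).values.min()
pruned = [(c.grad.abs() / g.sum() >= threshold).float() for c in masks]
```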
Our method, to be introduced in Section 3, also relies on computing the saliency scores for each mask element, but uses a more sophisticated loss function to incorporate the notion of trainability.
3 OUR METHOD: PROSPR
In this section we introduce our method, Prospect Pruning (ProsPr). We note that for the problem of pruning at initialization, the pruning step is immediately followed by training. Therefore, pruning should take into account the trainability of a weight, instead of only its immediate impact on the loss before training. In other words, we want to be able to identify weights that are not only important at initialization, but which may be useful for reducing the loss during training. To this end, we propose to estimate the effect of pruning on the loss over several steps of gradient descent at the beginning of training, rather than the changes in loss at initialization.
More specifically, ProsPr models how training would happen by performing multiple (M) iterations of backpropagation and weight updates—like during normal training. We can then backpropagate through the entire computation graph, from the loss several steps into training, back to the original mask, since the gradient descent procedure is itself differentiable. Once the pruning mask is computed, we rewind the weights back to their values at initialization and train the pruned network. The gradient-of-gradients is called a meta-gradient. This algorithm is illustrated visually in Figure 1.
The higher-order information in the meta-gradient includes interactions between the weights during training. When pruning at initialization, our ultimate goal is to pick a pruned model, A, which is more trainable than an alternative pruned model B. That means we want the loss L(ŷA, y) to be lower than L(ŷB , y) at convergence (for a fixed pruning ratio). Finding the optimal pruning mask is generally infeasible since the training horizon is long (i.e., evaluation is costly) and the space of possible pruning masks is large. Unlike other methods that must compute the saliency scores iteratively, we can use the meta-gradients to compute the pruning mask in one shot. This picks a line in loss-space, which more closely predicts the eventual actual loss. This is because it smooths out over more steps, and takes into account interactions between weights in the training dynamics. Crucially, in the limit of large M, the match to the ultimate objective is exact.
3.1 SALIENCY SCORES VIA META-GRADIENTS
We now introduce ProsPr formally. After initialising the network weights randomly to obtain winit, we apply a weight mask to the initial weights,
$w_0 = c \odot w_{\text{init}}$. (3)
This weight mask contains only ones, c = 1, as in SNIP (Lee et al., 2018), and represents the connectivity of the corresponding weights.
We then sample M+1 batches of data $\mathcal{D}_i \sim \mathcal{D}_{\text{train}}$ ($i \in \{0, \ldots, M\}$; $M \geq 1$) for the pruning step, and perform M weight updates¹,

$w_1 = w_0 - \alpha \nabla_{w_0} L(w_0, \mathcal{D}_0)$ (4)

$\vdots$

$w_M = w_{M-1} - \alpha \nabla_{w_{M-1}} L(w_{M-1}, \mathcal{D}_{M-1})$. (5)

Then, we compute a meta-gradient that backpropagates through these updates. Specifically, we compute the gradient of the final loss w.r.t. the initial mask,
$\nabla_c L(w_M, \mathcal{D}_M)$. (6)

Using the chain rule, we can write out the form of the meta-gradient beginning from the last step:

$\nabla_c L(w_M, \mathcal{D}) = \nabla_{w_M} L(w_M, \mathcal{D})\,(\nabla_c w_M)$, (7)

repeating for each step until we reach the zeroth step, whose gradient is trivial,

$= \nabla_{w_M} L(w_M, \mathcal{D})\,(\nabla_{w_{M-1}} w_M) \cdots (\nabla_{w_0} w_1)(\nabla_c w_0)$ (8)

$= \nabla_{w_M} L(w_M, \mathcal{D})\,(\nabla_{w_{M-1}} w_M) \cdots (\nabla_{w_0} w_1)(\nabla_c (c \odot w_{\text{init}}))$ (9)

$= \nabla_{w_M} L(w_M, \mathcal{D}) \left[ \prod_{m=1}^{M} (\nabla_{w_{m-1}} w_m) \right] w_{\text{init}}$. (10)
In practice, we can compute the meta-gradients by relying on automatic differentiation software such as PyTorch (Paszke et al., 2019). However, care must be taken to ensure that weights at each step are kept in memory so that the entire computation graph, including gradients, is visible to the automatic differentiation software. The saliency scores are now given by
$s_j = \frac{|g_j(w, \mathcal{D})|}{\sum_{k=1}^{m} |g_k(w, \mathcal{D})|}$, (11)

with

$g_j(w, \mathcal{D}) = \frac{\partial L(w_M, \mathcal{D})}{\partial c_j}$, (12)

where $w_M$ is a function of $c$. Equation (12) stands in contrast to SNIP, where the saliency is computed using the loss at $c \odot w_{\text{init}}$ rather than $w_M$. The saliency scores are then used to prune the initial weights $w_{\text{init}}$: the ones with the highest saliency scores are retained ($c_j = 1$), and all other weights are pruned ($c_j = 0$). Finally, the network is trained with the pruned weights $\hat{w}_{\text{init}}$.
Algorithm 1 summarises the proposed method, ProsPr.

¹We formalise the weight updates using vanilla SGD here; in practice these may be different when using approaches such as momentum or BatchNorm (Ioffe & Szegedy, 2015). Since our implementation relies on automatic differentiation in PyTorch (Paszke et al., 2019), we can use any type of update, as long as it is differentiable w.r.t. the initial mask c.
Algorithm 1 ProsPr Pseudo-Code
1: Inputs: a training dataset D_train, number of initial training steps M, number of main training steps N (M ≪ N), learning rate α
2: Initialise: network weights w_init
3: c_init = 1 ▷ Initialise mask with ones
4: w_0 = c_init ⊙ w_init ▷ Apply mask to initial weights
5: for k = 0, . . . , M − 1 do
6:     D_k ∼ D_train ▷ Sample batch of data
7:     w_{k+1} = w_k − α∇_w L(w_k, D_k) ▷ Update network weights
8: end for
9: g_j(w, D) = ∂L(w_M, D)/∂c_j ▷ Compute meta-gradient
10: s_j = |g_j(w, D)| / Σ_{k=1}^{m} |g_k(w, D)| ▷ Compute saliency scores
11: Determine the k-th largest element in s, s_k
12: c_prune,j = 1 if s_j ≥ s_k, 0 otherwise ▷ Set pruning mask
13: ŵ_0 = c_prune ⊙ w_init ▷ Apply mask to initial weights w_init
14: for i = 1, . . . , N do ▷ Train pruned model
15:     ŵ_{i+1} = ŵ_i − α∇_w L(ŵ_i, D)
16: end for
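For readers who prefer code, the following is a minimal PyTorch sketch of the inner loop of Algorithm 1 on a toy two-layer MLP. The model, data, and hyper-parameters are illustrative assumptions; the essential ingredient is `create_graph=True`, which keeps the M update steps differentiable with respect to the initial mask.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy two-layer MLP; the mask is the only leaf that requires grad.
w_init = [torch.randn(784, 100) * 0.05, torch.randn(100, 10) * 0.05]
masks = [torch.ones_like(w, requires_grad=True) for w in w_init]
alpha, M = 0.1, 3   # 3 inner steps, as used for the CIFAR experiments

def forward(ws, x):
    return F.relu(x @ ws[0]) @ ws[1]

# Line 4 of Algorithm 1: apply the (all-one) mask to the initial weights.
ws = [c * w for c, w in zip(masks, w_init)]

# Lines 5-8: M differentiable SGD steps (Eqs. 4-5); create_graph=True
# keeps every updated weight a function of the initial mask.
for _ in range(M):
    x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))
    loss = F.cross_entropy(forward(ws, x), y)
    grads = torch.autograd.grad(loss, ws, create_graph=True)
    ws = [w - alpha * g for w, g in zip(ws, grads)]

# Line 9: meta-gradient of the final loss w.r.t. the initial mask (Eq. 6).
x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))
meta_grads = torch.autograd.grad(F.cross_entropy(forward(ws, x), y), masks)

# Line 10: saliency scores (Eq. 11); the global top-k thresholding of
# lines 11-13 then proceeds exactly as in the SNIP sketch above.
g = torch.cat([mg.abs().flatten() for mg in meta_grads])
scores = g / g.sum()
```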
3.2 FIRST-ORDER APPROXIMATION
Taking the meta-gradient through many model updates (Equation (6)) can be memory intensive: in the forward pass, all gradients of the individual update steps need to be retained in memory to then be able to backpropagate all the way to the initial mask. However, we only need to perform a few steps² at the beginning of training, so in practice we can perform the pruning step on the CPU, which usually has access to more memory than a GPU. We apply this approach in our own experiments, with overheads of around 30 seconds being observed for the pruning step.
Alternatively, when the number of training steps needs to be large, we can use the following first-order approximation. Using Equation (10), the meta-gradient is:

$\nabla_c L(w_M, \mathcal{D}_M) = \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} (\nabla_{w_{m-1}} w_m) \right] w_{\text{init}}$, (13)

writing $w_m$ in terms of $w_{m-1}$ following SGD,

$= \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} \nabla_{w_{m-1}} \left( w_{m-1} - \alpha \nabla_{w_{m-1}} L(w_{m-1}; \mathcal{D}_m) \right) \right] w_{\text{init}}$, (14)

carrying through the partial derivative,

$= \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} \left( I - \alpha \nabla^2_{w_{m-1}} L(w_{m-1}; \mathcal{D}_m) \right) \right] w_{\text{init}}$, (15)

and finally dropping small terms for sufficiently small learning rates,

$\approx \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} I \right] w_{\text{init}}$ (16)

$= \nabla_{w_M} L(w_M, \mathcal{D}_M) \, w_{\text{init}}$. (17)

In the second-to-last step, we drop the higher-order terms, which gives us a first-order approximation of the meta-gradient³.
²We use 3 steps for experiments on the CIFAR-10, CIFAR-100 and TinyImageNet datasets.

³Note that this approximation also works for optimisers other than vanilla SGD (e.g., Adam, AdamW, AdaBound), except that the term which is dropped (the r.h.s. of Equation (15)) looks slightly different.
With this approximation, we only need to save the initial weight vector $w_{\text{init}}$ in memory and multiply it with the final gradient. This approximation can be crude when the Hessian terms are large, but with a sufficiently small learning rate it becomes precise. The approximation allows us to take many more intermediate gradient steps, which can be beneficial for performance when the training dataset has many classes, as we will see in Section 4.2.
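A sketch of how the approximation changes the implementation, under the same illustrative toy model as before: the M inner steps become ordinary training steps with no retained graph, and the meta-gradient is approximated by the final gradient multiplied elementwise by the initial weights (Eq. 17). The step count here is our own small choice for the toy.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

w_init = [torch.randn(784, 100) * 0.05, torch.randn(100, 10) * 0.05]
# Plain trainable copies; no unrolled graph is kept in memory.
ws = [w.clone().requires_grad_(True) for w in w_init]
alpha, M = 0.1, 8   # the paper uses up to 1024 steps on ImageNet

for _ in range(M):
    x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))
    loss = F.cross_entropy(F.relu(x @ ws[0]) @ ws[1], y)
    grads = torch.autograd.grad(loss, ws)   # ordinary first-order gradients
    with torch.no_grad():
        for w, g in zip(ws, grads):
            w -= alpha * g                  # plain SGD update, graph discarded

# Eq. (17): the meta-gradient is approximated by the final gradient
# multiplied elementwise by the initial weights.
x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))
final_loss = F.cross_entropy(F.relu(x @ ws[0]) @ ws[1], y)
final_grads = torch.autograd.grad(final_loss, ws)
approx_meta = [g * w0 for g, w0 in zip(final_grads, w_init)]
```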
4 EXPERIMENTS
We empirically evaluate the performance of our method, ProsPr, compared to various vision classification baselines across different architectures and datasets. In the supplementary sections we show the effectiveness of our method on image segmentation tasks (Appendix D) and when using self-supervised initialization (Appendix E). We provide details of our hyper-parameters, experiment setup, and implementation details in Appendix A.
4.1 RESULTS ON CIFAR AND TINY-IMAGENET
In recent work, Frankle et al. (2021) extensively study and evaluate different pruning-at-initialization methods under various effects such as weight re-initialization, weight shuffling, and score inversion. They report the best achievable results by these methods and highlight the gap between their performance and two pruning-at-convergence methods, weight rewinding and magnitude pruning (Renda et al., 2020; Frankle et al., 2020).
In Figure 2 we evaluate ProsPr on this benchmark using ResNet-20 and VGG-16 on CIFAR-10, and ResNet-18 on Tiny-ImageNet. It can be seen that ProsPr reduces the performance gap, especially at higher sparsity levels, and in some cases exceeds the accuracy of pruning-after-convergence methods. Full results are also summarised in Appendix B.
This is a remarkable achievement: ProsPr is the first work to close the gap to methods that prune after training. Previous works that prune at the start have not been able to outperform methods that prune after training in any setting, including smaller datasets such as CIFAR-10 or Tiny-ImageNet. It is also important to note that the other baselines with comparable accuracies are all iterative methods. ProsPr is the only method that can do this in a single shot, after using only 3 steps with batch sizes of 512 in the inner loop before computing the meta-gradients. In total, we only use 4 batches of data. We also do not do any averaging of scores by repeating the method multiple times.
The performance on these small datasets comes from the fact that ProsPr computes higher-order gradients. While there are other iterative methods that can work without any data, their effect is mostly a more graceful degradation at extreme pruning ratios, as opposed to the best accuracy at more practical sparsity levels. One example is SynFlow, which is similar to FORCE but uses an all-one input tensor instead of samples from the training set (Tanaka et al., 2020).
4.2 RESULTS ON IMAGENET DATASET
To evaluate the performance of ProsPr on more difficult tasks we run experiments on the larger ImageNet dataset. Extending gradient-based pruning methods to this dataset poses several challenges.
Number of classes In synaptic-saliency methods, the mini-batches must have enough examples from all classes in the dataset. Wang et al. (2020) recommend using class-balanced mini-batches sized at ten times the number of classes. In datasets with few classes this is not an issue, and even a single batch includes multiple examples per class. This is one reason why methods like SNIP work with a single batch, and why we kept the number of steps in ProsPr’s inner loop fixed to only 3. ImageNet, however, has 1,000 classes, and using a single batch or a handful of small batches is inadequate. Previous methods such as FORCE, GraSP, or SynFlow avoid this problem by repeating the algorithm with new data batches and averaging the saliency scores. In ProsPr we instead increase the number of updates before computing the meta-gradients, ensuring they flow through enough data. Computing meta-gradients through many steps, however, poses new challenges.
Gradient degradation We start to see gradient stability issues when computing gradients over deep loops. Gradient degradation problems, i.e., vanishing and exploding gradients, have also been observed in other fields that use meta-gradients, such as Meta-Learning. Many solutions have been proposed to stabilize gradients when the length of the loop increases beyond 4 or 5 steps, although this remains an open area of research (Antoniou et al., 2019).
Computation Complexity For ImageNet we must make the inner loop hundreds of steps deep to achieve balanced data representation. In addition to stability issues, backpropagating through hundreds of steps is very compute intensive.
Therefore, for our experiments on ImageNet we use the first-order approximation of ProsPr (Section 3.2). We evaluate ProsPr using ResNet-50 and VGG-19 architectures and compare against the state-of-the-art methods FORCE and Iter-SNIP introduced by de Jorge et al. (2021). We include multi-batch versions of SNIP and GraSP (SNIP-MB and GraSP-MB) to provide a fair comparison to iterative methods, which partially prune several times during training, in terms of the amount of data presented to the method. We use 1024 steps with a batch size of 256 (i.e. 262,144 samples) for ResNet-50. For VGG-19, a much larger model that requires more GPU memory, we do 256 steps with a batch size of 128. This is still far fewer samples than other methods use. FORCE, for example, gradually prunes in 60 steps, where each step involves computing and averaging scores over 40 batches of size 256, i.e. performing backpropagation 2400 times and showing 614,400 samples to the algorithm.
Table 1 shows our results compared to the baselines reported by de Jorge et al. (2021). First-order ProsPr exceeds previous results in all configurations except one, where it is outperformed by GraSP. Note the surprisingly good performance of random pruning of ResNets, which was also observed by de Jorge et al. (2021). This could be explained by the fact that VGG-19 is a much larger architecture with 143.6 million parameters, compared to 15.5 million in ResNet-50s. More specifically, the final three dense layers of VGG-19 constitute 86% of its total prunable parameters, while the convolution layers of VGG constitute only 14% of the prunable weights. Pruning methods are therefore able to keep more of the convolution weights and instead prune extensively from the over-parametrized dense layers. ResNet architectures, on the other hand, have a single dense classifier at the end.
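The VGG-19 parameter breakdown quoted above is straightforward to check against the torchvision reference model; the snippet below is a small verification we added for illustration, not code from the paper.

```python
import torchvision

# Requires a torchvision version with the `weights` keyword (>= 0.13);
# older versions use `pretrained=False` instead.
model = torchvision.models.vgg19(weights=None)
conv = sum(p.numel() for p in model.features.parameters())
fc = sum(p.numel() for p in model.classifier.parameters())
total = conv + fc
print(f"conv: {conv / 1e6:.1f}M ({conv / total:.0%}), "
      f"fc: {fc / 1e6:.1f}M ({fc / total:.0%}), total: {total / 1e6:.1f}M")
# The three dense layers hold roughly 86% of VGG-19's parameters.
```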
4.3 STRUCTURED PRUNING
We also evaluate ProsPr in the structured pruning setup where instead of pruning individual weights, entire channels (or columns of linear layers) are removed. This is a more restricted setup, however it offers memory savings and reduces the computational cost of training and inference.
Adopting ProsPr for structured pruning is as simple as changing the shape of the pruning mask c in Eq. 3 to have one entry per channel (or per column of a linear layer’s weight matrix); a short sketch follows below. We evaluate our method against 3SP, a method that extends SNIP to structured pruning (van Amersfoort et al., 2020). Our results are summarized in Table 2, which shows accuracy improvements in all scenarios. In Appendix C we also evaluate wall-clock improvements in training time as a result of structured pruning at initialization.
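Concretely, the only change is the mask’s shape: one entry per output channel, with broadcasting zeroing out whole channels. A minimal sketch, with an illustrative layer shape:

```python
import torch

w = torch.randn(64, 32, 3, 3)   # conv weight: (out_ch, in_ch, kH, kW)

# Unstructured ProsPr: one mask entry per individual weight.
c_unstructured = torch.ones_like(w, requires_grad=True)

# Structured ProsPr: one mask entry per output channel; broadcasting over
# the remaining dimensions removes a whole channel when an entry is zeroed.
c_structured = torch.ones(64, 1, 1, 1, requires_grad=True)
w_masked = c_structured * w     # Eq. 3 with a channel-shaped mask
```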
4.4 NUMBER OF META STEPS
Finally, we evaluate ProsPr when using a varying number of meta steps, which gives insight into whether using meta-gradients is beneficial. We repeated the experiments from Section 4.3, but this time varying the number of training steps between 0 and 3. The results in Table 3 show that the final accuracy consistently increases as we increase the depth of training, showing the effectiveness of meta-gradients. We used the same data batch in all M training steps to isolate the effect of M, while in other experiments we use a new batch in every step.
In theory increasing the number of training steps should always help and match the ultimate objective (estimating the loss after many epochs of training) in the limit. However, in practice increasing the number of steps beyond 3 poses a lot of gradient stability issues (and is computationally expensive). These issues have been also identified in the meta-learning literature (Antoniou et al., 2019).
5 RELATED WORK
Pruning at initialization Several works extend the approach proposed by Lee et al. (2018). de Jorge et al. (2021) evaluate the SNIP objective in a loop in which pruned parameters still receive gradients and therefore have a chance to become un-pruned. The gradual pruning helps avoid the layer-collapse issue, and their method, known as FORCE, achieves better performance at extreme sparsity levels. Tanaka et al. (2020) provide theoretical justification for why iterative pruning helps with the layer-collapse issue and propose a data-free version of the method where an all-one input tensor is used instead of real training data. Wang et al. (2020) propose an alternative criterion to minimizing changes in the loss and instead argue for preserving the gradient flow. Their method, GraSP, keeps weights that contribute most to the norm of the gradients. van Amersfoort et al. (2020) extend SNIP and GraSP to structured pruning to make training and inference faster. They further augment the scores by their compute cost to push the pruning decision towards greater FLOPS reduction.
Gradual pruning As discussed in Section 1, in existing methods the training step has been absent from the saliency computation step. As a workaround, many methods make their approaches training-aware by applying pruning gradually and interleaving it with training: Zhu & Gupta (2018) proposed an exponential schedule for pruning-during-training and Gale et al. (2019) showed its effectiveness in a broader range of tasks. Frankle & Carbin (2019) show that weight rewinding achieves better results when done in multiple prune-retrain steps. Lym et al. (2019) continuously apply structured pruning via group-lasso regularization while at the same time increasing batch sizes. You et al. (2020) find pruned architectures after a few epochs of training-and-pruning and monitoring a distance metric.
Meta-Gradients Backpropagation through gradients, and its first-order approximation, is also used in the model-agnostic meta-learning literature (Finn et al., 2017; Zintgraf et al., 2019), where the objective is to find a model that can be adapted to new data in a few training steps. Similar to our setup, the meta-loss captures the trainability of a model, but additionally the meta-gradients are used to update the network’s weights in a second loop. In the self-supervised learning setting, Xiao et al. (2021) use meta-gradients to explicitly optimize a learn-to-generalize regularization term in nested meta-learning loops. Computing gradients-of-gradients is also used to regularize the loss with a penalty on the gradients, for instance to enforce Lipschitz continuity on the network (Gulrajani et al., 2017) or to control different norms of the gradients (Alizadeh et al., 2020).
6 DISCUSSION
Although pruning at initialization has the potential to greatly reduce the cost of training neural networks, existing methods have not lived up to their promise. We argue that this is, in part, because they do not account for the fact that the pruned network is going to be trained after it is pruned. We take this into account, using a saliency score that captures the effect of a pruning mask on the training procedure. As a result, our method is competitive not just with methods that prune before training, but also with methods that prune iteratively during training and those that prune after training. In principle, compressing neural networks at initialization has the potential to reduce energy and environmental costs of machine learning. Beyond our context, taking into account that methods which prune-at-convergence generally have to be fine-tuned, it is possible that our work could have further implications for these pruning methods as well (Molchanov et al., 2016; Wang et al., 2019).
ACKNOWLEDGMENTS
Milad Alizadeh is grateful for funding by the EPSRC (grant references EP/R512333/1) and Arm (via NPIF 2017 studentship). Shyam Tailor is supported by EPSRC grants EP/M50659X/1 and EP/S001530/1 (the MOA project) and the European Research Council via the REDIAL project (Grant Agreement ID: 805194). Luisa Zintgraf is supported by the 2017 Microsoft Research PhD Scholarship Program, and the 2020 Microsoft Research EMEA PhD Award. Joost van Amersfoort is grateful for funding by the EPSRC (grant reference EP/N509711/1) and Google-DeepMind. Sebastian Farquhar is supported by the EPSRC via the Centre for Doctoral Training in Cybersecurity at the University of Oxford as well as Christ Church, University of Oxford.
A EXPERIMENTAL SETUP
A.1 ARCHITECTURE DETAILS
We use standard VGG and ResNet models provided by torchvision throughout this work where possible. The ResNet-20 model, which is not commonly evaluated, was implemented to match the version used by Frankle et al. (2021) so that we could compare using the benchmark supplied by this paper.
For smaller datasets, it is common to patch models defined for ImageNet. Specifically, for ResNets, we replace the first convolution with one of filter size 3 × 3 and stride 1; the first max-pooling layer is replaced with an identity operation. For VGG, we follow the convention used by works such as FORCE (de Jorge et al., 2021). We do not change any convolutional layers, but we change the classifier to use a single global average pooling layer, followed by a single fully-connected layer.
A.2 TRAINING DETAILS
For CIFAR-10, CIFAR-100 and TinyImageNet we perform 3 meta-steps to calculate our saliency criteria. We train the resulting models for 200 epochs, with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 100 and 150. Weight decay was set to 5×10−4. Batch size for CIFAR-10, CIFAR-100, and TinyImageNet was 256. For CIFAR-10 and CIFAR-100 we augment training data by applying random cropping (32 × 32, padding 4) and horizontal flipping. For TinyImageNet we use the same procedure, with random cropping parameters set to 64 × 64, padding 4.

For ImageNet we train models for 100 epochs, with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 30, 60 and 90. Weight decay was set to 1×10−4. Batch size was 256. We use the first-order approximation to do pruning, and use 1024 steps for ResNet-50. For VGG-19 we use 2048 steps, but with batch size set to 128 (due to memory limitations, as our implementation only utilized a single GPU for meta-training). We apply random resizing, then crop the image to 224 × 224, with horizontal flipping.
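For concreteness, the CIFAR/TinyImageNet recipe above maps onto a standard PyTorch optimiser and scheduler as in the sketch below; the momentum value and the stand-in model are our assumptions, not stated in the text.

```python
import torch
import torchvision

# Stand-in for the pruned network; any nn.Module works here.
model = torchvision.models.resnet18(num_classes=200)  # TinyImageNet has 200 classes

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9,              # assumption: not stated above
                            weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 150], gamma=0.1)       # lr / 10 at epochs 100 and 150

for epoch in range(200):
    # ... one epoch of standard supervised training goes here ...
    scheduler.step()
```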
A.3 IMPLEMENTATIONS
In addition to our code, the reader may find it useful to reference the following repos from related work. Our experiments were performed using code derived from these implementations:
B NUMBERS FROM FIGURE 2
Table 4: Numerical results for ResNet-20 on CIFAR-10
Sparsity (%) 20.0 36.0 48.8 59.0 67.2 73.8 79.0 83.2 86.6 89.3 91.4 93.1 94.5 95.6 96.5 97.2 97.7 98.2
LTR after Training 91.8 ± 0.2 91.9 ± 0.2 91.9 ± 0.2 91.7 ± 0.2 91.5 ± 0.1 91.4 ± 0.1 91.1 ± 0.1 90.6 ± 0.1 90.1 ± 0.0 89.2 ± 0.1 88.0 ± 0.2 86.8 ± 0.2 85.7 ± 0.1 84.4 ± 0.2 82.8 ± 0.1 81.2 ± 0.3 79.4 ± 0.3 77.3 ± 0.5
Magnitude after Training 92.2 ± 0.3 92.0 ± 0.2 92.0 ± 0.2 91.7 ± 0.1 91.5 ± 0.2 91.3 ± 0.2 91.1 ± 0.2 90.7 ± 0.2 90.2 ± 0.2 89.4 ± 0.2 88.7 ± 0.2 87.7 ± 0.2 86.5 ± 0.2 85.2 ± 0.2 83.5 ± 0.3 81.9 ± 0.3 80.4 ± 0.2 77.7 ± 0.4
Magnitude at Initialization 91.5 ± 0.2 91.2 ± 0.1 90.8 ± 0.1 90.7 ± 0.2 90.2 ± 0.1 89.8 ± 0.2 89.3 ± 0.2 88.6 ± 0.2 87.9 ± 0.3 87.0 ± 0.3 86.1 ± 0.2 85.2 ± 0.4 83.9 ± 0.2 82.5 ± 0.4 80.7 ± 0.5 79.1 ± 0.4 77.2 ± 0.4 74.5 ± 0.7
SNIP 91.8 ± 0.2 91.2 ± 0.3 90.9 ± 0.1 90.7 ± 0.1 90.1 ± 0.2 89.7 ± 0.3 89.0 ± 0.2 88.5 ± 0.3 87.7 ± 0.2 87.2 ± 0.4 85.8 ± 0.1 84.7 ± 0.5 83.8 ± 0.3 82.5 ± 0.4 80.9 ± 0.2 79.1 ± 0.2 77.3 ± 0.2 74.0 ± 0.5
GraSP 91.5 ± 0.1 91.3 ± 0.2 91.2 ± 0.1 90.6 ± 0.2 90.3 ± 0.2 89.6 ± 0.1 89.1 ± 0.2 88.4 ± 0.2 87.9 ± 0.1 87.0 ± 0.2 85.9 ± 0.1 85.1 ± 0.4 83.9 ± 0.4 82.8 ± 0.2 81.2 ± 0.2 79.7 ± 0.3 78.0 ± 0.3 76.0 ± 0.5
SynFlow 91.7 ± 0.1 91.3 ± 0.2 91.2 ± 0.1 90.8 ± 0.1 90.4 ± 0.2 89.8 ± 0.1 89.5 ± 0.3 88.9 ± 0.4 88.1 ± 0.1 87.4 ± 0.5 86.1 ± 0.2 85.4 ± 0.2 84.3 ± 0.2 82.9 ± 0.2 81.7 ± 0.2 80.0 ± 0.3 78.6 ± 0.4 76.4 ± 0.4
Random 91.6 ± 0.2 91.2 ± 0.2 90.8 ± 0.3 90.5 ± 0.2 89.8 ± 0.2 89.0 ± 0.4 88.4 ± 0.2 87.5 ± 0.3 86.6 ± 0.2 85.6 ± 0.3 84.3 ± 0.4 83.1 ± 0.4 81.6 ± 0.3 79.6 ± 0.4 74.2 ± 6.4 64.7 ± 9.7 56.9 ± 8.5 43.7 ± 12.5
ProsPr 92.3 ± 0.1 92.1 ± 0.0 91.7 ± 0.2 91.5 ± 0.1 91.0 ± 0.2 90.5 ± 0.0 90.1 ± 0.1 89.6 ± 0.2 88.5 ± 0.5 87.8 ± 0.1 86.9 ± 0.3 85.5 ± 0.6 84.3 ± 0.2 83.0 ± 0.9 80.8 ± 0.5 79.6 ± 0.7 77.0 ± 0.8 74.2 ± 0.3
Table 5: Numerical results for VGG-16 on CIFAR-10
Sparsity (%) 20.0 36.0 48.8 59.0 67.2 73.8 79.0 83.2 86.6 89.3 91.4 93.1 94.5 95.6 96.5 97.2 97.7 98.2
LTR after Training 93.5 ± 0.1 93.6 ± 0.1 93.6 ± 0.1 93.6 ± 0.1 93.8 ± 0.1 93.6 ± 0.1 93.6 ± 0.1 93.8 ± 0.1 93.8 ± 0.1 93.7 ± 0.1 93.7 ± 0.1 93.8 ± 0.1 93.5 ± 0.2 93.4 ± 0.1 93.2 ± 0.1 93.0 ± 0.2 92.7 ± 0.1 92.1 ± 0.4
Magnitude after Training 93.9 ± 0.2 93.9 ± 0.2 93.8 ± 0.1 93.8 ± 0.1 93.9 ± 0.1 94.0 ± 0.2 93.8 ± 0.1 93.8 ± 0.1 93.9 ± 0.2 93.9 ± 0.2 93.8 ± 0.2 93.7 ± 0.2 93.5 ± 0.1 93.5 ± 0.1 93.3 ± 0.2 93.0 ± 0.1 92.9 ± 0.1 91.7 ± 0.8
Magnitude at Initialization 93.6 ± 0.2 93.4 ± 0.2 93.3 ± 0.1 93.2 ± 0.2 93.3 ± 0.3 93.0 ± 0.1 93.1 ± 0.1 92.9 ± 0.1 92.9 ± 0.2 92.7 ± 0.1 92.5 ± 0.2 92.3 ± 0.1 92.2 ± 0.2 92.0 ± 0.1 91.8 ± 0.2 91.5 ± 0.1 91.3 ± 0.3 90.9 ± 0.2
SNIP 93.6 ± 0.1 93.4 ± 0.1 93.3 ± 0.1 93.4 ± 0.2 93.3 ± 0.2 93.4 ± 0.1 93.1 ± 0.1 93.1 ± 0.1 93.2 ± 0.1 93.1 ± 0.1 92.9 ± 0.1 92.8 ± 0.2 92.8 ± 0.1 92.3 ± 0.2 92.2 ± 0.1 92.1 ± 0.1 91.7 ± 0.1 91.5 ± 0.1
GraSP 93.5 ± 0.1 93.4 ± 0.2 93.5 ± 0.0 93.3 ± 0.1 93.2 ± 0.2 93.3 ± 0.2 93.2 ± 0.1 93.0 ± 0.3 93.0 ± 0.1 92.7 ± 0.2 92.8 ± 0.1 92.4 ± 0.1 92.3 ± 0.1 92.2 ± 0.1 91.9 ± 0.1 91.6 ± 0.2 91.5 ± 0.0 91.2 ± 0.2
SynFlow 93.6 ± 0.2 93.6 ± 0.1 93.5 ± 0.1 93.4 ± 0.1 93.4 ± 0.2 93.5 ± 0.2 93.2 ± 0.1 93.2 ± 0.1 93.1 ± 0.1 92.9 ± 0.1 92.7 ± 0.2 92.5 ± 0.1 92.3 ± 0.1 92.0 ± 0.1 91.8 ± 0.3 91.3 ± 0.1 91.0 ± 0.2 90.6 ± 0.2
Random 93.6 ± 0.3 93.2 ± 0.1 93.2 ± 0.2 93.0 ± 0.2 92.7 ± 0.2 92.4 ± 0.2 92.2 ± 0.1 91.7 ± 0.1 91.2 ± 0.1 90.8 ± 0.2 90.3 ± 0.2 89.6 ± 0.2 88.8 ± 0.2 88.3 ± 0.4 87.6 ± 0.1 86.4 ± 0.2 86.0 ± 0.4 84.5 ± 0.4
ProsPr 93.7 ± 0.2 93.7 ± 0.1 93.9 ± 0.1 93.8 ± 0.1 93.8 ± 0.1 93.5 ± 0.2 93.6 ± 0.1 93.4 ± 0.3 93.5 ± 0.2 93.3 ± 0.1 93.0 ± 0.1 93.0 ± 0.1 92.8 ± 0.3 92.7 ± 0.1 92.6 ± 0.1 92.2 ± 0.1 92.1 ± 0.2 91.6 ± 0.4
Table 6: Numerical results for ResNet-18 on TinyImageNet
Sparsity (%) 20.0 36.0 48.8 59.0 67.2 73.8 79.0 83.2 86.6 89.3 91.4 93.1 94.5 95.6 96.5 97.2 97.7 98.2
LTR after Training 51.7 ± 0.2 51.4 ± 0.3 51.5 ± 0.4 52.1 ± 0.4 51.8 ± 0.4 52.0 ± 0.1 52.0 ± 0.1 52.0 ± 0.2 52.1 ± 0.3 52.0 ± 0.2 52.4 ± 0.2 51.8 ± 0.4 51.8 ± 0.6 51.4 ± 0.4 50.9 ± 0.2 49.3 ± 0.7 48.3 ± 0.7 46.0 ± 0.3
Magnitude after Training 51.7 ± 0.3 51.4 ± 0.1 51.7 ± 0.2 51.5 ± 0.3 51.7 ± 0.4 51.4 ± 0.5 51.1 ± 0.3 51.4 ± 0.4 51.3 ± 0.4 51.1 ± 0.6 51.7 ± 0.3 51.3 ± 0.3 51.8 ± 0.4 51.2 ± 0.3 51.1 ± 0.2 50.4 ± 0.2 49.0 ± 0.2 47.8 ± 0.5
Magnitude at Initialization 51.0 ± 0.3 51.2 ± 0.3 51.0 ± 0.2 50.5 ± 0.5 50.6 ± 0.3 50.0 ± 0.3 50.3 ± 0.2 50.3 ± 0.3 50.0 ± 0.1 49.8 ± 0.5 49.0 ± 0.1 48.3 ± 0.3 47.2 ± 0.2 46.2 ± 0.2 44.4 ± 0.5 42.2 ± 0.1 40.8 ± 0.4 38.1 ± 0.6
SNIP 51.4 ± 0.2 51.5 ± 0.3 51.4 ± 0.3 51.3 ± 0.5 51.6 ± 0.4 51.4 ± 0.5 51.9 ± 0.6 51.5 ± 0.3 51.0 ± 0.2 51.2 ± 0.7 50.6 ± 0.3 50.1 ± 0.3 49.2 ± 0.3 47.8 ± 0.2 46.7 ± 0.1 45.2 ± 0.4 44.5 ± 0.3 42.3 ± 0.3
GraSP 49.8 ± 0.4 49.1 ± 0.3 49.5 ± 0.2 49.5 ± 0.4 49.2 ± 0.1 49.5 ± 0.2 48.7 ± 0.1 49.0 ± 0.5 48.8 ± 0.4 48.3 ± 0.1 48.2 ± 0.1 47.7 ± 0.2 46.5 ± 0.1 45.5 ± 0.7 44.9 ± 0.2 44.1 ± 1.0 42.9 ± 0.5 41.0 ± 0.1
SynFlow 51.8 ± 0.3 51.6 ± 0.3 51.7 ± 0.7 51.8 ± 0.2 51.3 ± 0.4 51.3 ± 0.4 51.5 ± 0.2 51.0 ± 0.4 50.2 ± 0.4 50.4 ± 0.3 49.1 ± 0.0 48.0 ± 0.5 46.7 ± 0.7 45.6 ± 0.0 44.0 ± 0.2 42.2 ± 0.3 40.0 ± 0.1 38.2 ± 0.5
Random 50.6 ± 0.5 50.1 ± 0.2 49.9 ± 0.3 48.7 ± 0.2 48.0 ± 0.4 48.0 ± 0.6 46.4 ± 0.1 45.9 ± 0.5 44.7 ± 0.2 43.6 ± 0.3 42.7 ± 0.2 41.4 ± 0.4 40.2 ± 0.2 37.2 ± 0.2 36.2 ± 0.7 34.0 ± 0.4 32.2 ± 0.5 30.0 ± 0.3
ProsPr 51.8 ± 0.4 51.4 ± 0.7 51.2 ± 0.9 52.0 ± 0.2 51.8 ± 0.1 51.2 ± 0.4 52.0 ± 0.3 51.6 ± 0.7 51.1 ± 0.4 50.7 ± 0.6 50.9 ± 0.3 50.8 ± 1.2 51.1 ± 0.7 50.8 ± 0.5 50.8 ± 0.3 49.6 ± 0.6 49.2 ± 0.2 46.9 ± 0.7
C WALL CLOCK TIME FOR STRUCTURE PRUNING AT INITIALIZATION
When pruning is done at convergence, the benefits of having a compressed model (in terms of memory savings and speed-up) can only be utilized at inference/deployment time. However, with pruning-at-initialization these benefits can be reaped during training as well. This is especially true in the case of structured pruning, where pruning results in weights and convolutional kernels with smaller dimensions (as opposed to unstructured pruning, where we end up with sparse weights of the original dimensions). This means that, in addition to memory savings, training takes fewer operations, which speeds it up. To evaluate the benefits of pruning at initialization in terms of speed improvements, we measured the wall-clock training time on an NVIDIA RTX 2080 Ti GPU for the architectures used in Section 4.3 (and additionally on the ImageNet dataset). The results in Table 7 show that structured pruning with ProsPr can significantly reduce the overall training time.
D RESULTS ON SEGMENTATION TASK
An interesting, albeit less common, application for pruning models is within the context of segmentation. In a recent paper, Jeong et al. (2021) train and prune the U-Net (Ronneberger et al., 2015) architecture on two image datasets from the Cell Tracking Challenge (PhC-C2DH-U373 and DIC-C2DH-HeLa). They use the classic multi-step approach of gradually applying magnitude pruning interleaved with fine-tuning stages. To evaluate the flexibility of our method, we used meta-gradients at the beginning of training (on a randomly initialized U-Net), pruned in a single shot, and trained the network once for the same number of epochs (50). We kept the training set-up the same as the baseline by Jeong et al. (2021) (i.e., resizing images and segmentation maps to (256, 256), setting aside 30% of training data for validation) and similarly aim to find the highest prune ratio that does not result in IOU degradation. We report the intersection-over-union (IOU) metric for the two datasets in Tables 8 and 9:
Table 8: Mean IOU on U373 validation

Method         Prune Ratio   Mean IOU
Unpruned       -             0.9371
Jeong et al.   95%           0.9368
ProsPr         97%           0.9369

Table 9: Mean IOU on HeLa validation

Method         Prune Ratio   Mean IOU
Unpruned       -             0.7514
Jeong et al.   81.8%         0.7411
ProsPr         90%           0.7491
These results show that our method works as well (or better) compared to this compute-expensive baseline, in the sense that we can prune more parameters while keeping the IOU score the same.
E SELF-SUPERVISED INITIALIZATION
To evaluate the robustness and consistency of our method against non-random initialization, we ran experiments using BYOL to learn representations from unlabeled samples (Grill et al., 2020). We used ResNet-18 as a backbone and trained for 1000 epochs with an embedding size of 64. Unlike the vanilla ResNet-18 architecture used in Section 4.3, we used the commonly-used modified version of ResNet-18 for smaller inputs (removing the first pooling layer and modifying the first convolutional layer to have a kernel size of 3, a stride of 1, and a padding size of 1). We then used this trained ResNet-18 as the initialization for our meta-gradient pruning method. After the pruning step, all layers were trained as before until convergence. All training hyper-parameters were kept as before. The results (final test accuracies for 95% pruning) are summarized in Table 10.
These results show the robustness of our method for this particular self-supervised initialization. Starting from a learned representation can be challenging because these representations are much closer to weight values at convergence, and therefore the magnitude of their gradients is significantly smaller than randomly initialized weights. However, this is less of a problem for meta-gradients as their magnitude is still significant due to back-propagation through training steps. This can be seen in Figure 3 which shows the L2 norm of gradients of each layer of a BYOL-initialized ResNet-18 for meta-gradients compared to normal gradients. It can be seen that meta-gradients provide a stronger signal compared to normal gradients. | 1. What is the problem addressed by the paper in the context of neural network optimization?
2. What is the proposed solution to the problem, and how does it differ from existing methods?
3. What is the key insight or idea behind the proposed approach?
4. How effective is the proposed method in addressing the problem, and what are its limitations?
5. Are there any potential applications or extensions of the proposed method beyond the specific task discussed in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes Prospect Pruning (ProsPr) to handle the problem of short-sightedness of existing methods. The main idea is to use meta-gradients through the first few steps of optimization to determine which weights to prune.
Review
Since current methods are insufficient to enable this optimization and lead to a large degradation in model performance, the authors identify a fundamental limitation, namely that existing saliency criteria look at a single step at the start of training without considering the trainability of the network, and propose a new method to address it. The paper tackles an important problem.
The paper is well organized and easy to follow.
It would be better if the authors could validate their method on more tasks, such as human segmentation and image denoising.
ICLR | Title
Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients
Abstract
Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference. However, current methods are insufficient to enable this optimization and lead to a large degradation in model performance. In this paper, we identify a fundamental limitation in the formulation of current methods, namely that their saliency criteria look at a single step at the start of training without taking into account the trainability of the network. While pruning iteratively and gradually has been shown to improve pruning performance, explicit consideration of the training stage that will immediately follow pruning has so far been absent from the computation of the saliency criterion. To overcome the short-sightedness of existing methods, we propose Prospect Pruning (ProsPr), which uses meta-gradients through the first few steps of optimization to determine which weights to prune. ProsPr combines an estimate of the higherorder effects of pruning on the loss and the optimization trajectory to identify the trainable sub-network. Our method achieves state-of-the-art pruning performance on a variety of vision classification tasks, with less data and in a single shot compared to existing pruning-at-initialization methods. Our code is available online at https://github.com/mil-ad/prospr.
1 INTRODUCTION
Pruning at initialization—where we remove weights from a model before training begins—is a recent and promising area of research that enables us to enjoy the benefits of pruning at training time, and which may aid our understanding of training deep neural networks.
Frankle & Carbin (2019) provide empirical evidence for the existence of sparse sub-networks that can be trained from initialization and achieve accuracies comparable to the original network. These “winning tickets” were originally found in an iterative process where, in each iteration, the network is trained to full convergence followed by pruning a subset of the weights by magnitude. The values of the remaining weights are then rewound to their value at initialization, and the process is repeated iteratively until the desired sparsity level is achieved.
This process, known as Lottery Ticket Rewinding (LTR), is very compute-intensive and is prone to failures. For instance, Frankle et al. (2020) show better results by rewinding weights not all the way back to initialization, but to early stages of training instead. LTR is especially prone to failure for more difficult problems (e.g., training on ImageNet), where we must rewind weights to their state several epochs into training.
A recent line of work proposes alternative practical solutions to identify these sub-networks before training begins, without the cost of retraining the network iteratively Lee et al. (2018); Wang et al. (2020); de Jorge et al. (2021); Tanaka et al. (2020). This class of methods uses gradients to assess
∗Corresponding author. Contact at milad.alizadeh@cs.ox.ac.uk
the importance of neural network weights. These gradients are often known as Synaptic Saliencies and are used to estimate the effect of pruning a single parameter in isolation on various objectives, typically the loss function. This objective is not so different from classical pruning-at-convergence methods, but the gradients for a well-trained model are small; therefore these methods must inspect higher-order metrics such as the Hessian to estimate the pruning effect (LeCun et al., 1990; Hassibi & Stork, 1993). Pruning at initialization is desirable because the benefits of pruning (in terms of memory and speed) can be reaped during training, rather than only at inference/deployment time.
However, the performance of prune-at-init methods remains poor: the degradation in accuracy is still significant compared to training the full model and LTR, making these methods impractical for many real-world problems (Frankle et al., 2021). In this paper, we identify a fundamental limitation in the objective formulation of current methods, namely that saliency criteria do not take into account the fact that the model is going to be trained after the pruning step. If our aim was to simply prune a subset of weights without affecting the loss, then these saliency criteria are estimating the correct objective. However, this estimate does not take into account that we are going to train the weights after we prune them. We need a metric that captures the trainability of the weights during the optimization steps, rather than a single myopic estimate.
Many methods attempt to overcome this by pruning gradually and/or adding training steps between iterative pruning steps (Zhu & Gupta, 2018; You et al., 2020; de Jorge et al., 2021). Although this approach has been shown to be effective, it is expensive and cumbersome in practice and ultimately is an indirect approximation to the trainability criteria we are looking to incorporate into our objective.
In this paper, we propose Prospect Pruning (ProsPr), a new pruning-at-init method that learns from the first few steps of optimization which parameters to prune. We explicitly formulate our saliency criteria to account for the fact that the network will be trained after pruning. More precisely, ProsPr uses meta-gradients by backpropagating through the first few model updates in order to estimate the effect the initial pruning parameters have on the loss after a few gradient descent steps. Effectively this enables us to account for both higher-order effects of pruning weights on the loss, as well as the trainability of individual weights. Similar to other methods we apply pruning to initialization values of weights and train our models from scratch. In summary, our contributions are:
• We identify a key limitation in prior saliency criteria for pruning neural networks—namely that they do not explicitly incorporate trainability-after-pruning into their criteria.
• We propose a new pruning-at-init method, ProsPr, that uses meta-gradients over the first few training steps to bridge the gap between pruning and training.
• We show empirically that ProsPr achieves higher accuracy compared to existing pruningat-init methods. Unlike other methods, our approach is single shot in the sense that the pruning is applied to the network initial weights in a single step.
2 BACKGROUND
In this section we review the key concepts that our method builds upon. We delay comparisons to other pruning techniques in the literature to Section 5.
Classic post-training pruning methods aim to identify and remove network weights with the least impact on the loss (LeCun et al., 1990; Hassibi & Stork, 1993). They typically use the Taylor expansion of the loss with respect to parameters to define a saliency score for each parameter: δL ≈ ∇θL⊤δθ + 12δθ
⊤H δθ, where H = ∇2θL is the Hessian matrix. When the network has converged, the first-order term in the expansion is negligible, and hence these methods resort to using H.
Lee et al. (2018) introduce SNIP, and show that the same objective of minimizing the change in loss can be used at initialization to obtain a trainable pruned network. At initialization, the first-order gradients ∇θ in the local quadratic approximation are still significant, so higher-order terms can be ignored. Hence the computation of the parameter saliencies can be done using backpropagation.
The Taylor expansion approximates the effect of small additive perturbations to the loss. To better approximate the effect of removing a weight, Lee et al. (2018) attach a multiplicative all-one mask to the computation graph of each weight. This does not change the forward-pass of the network, but it enables us to form the Taylor expansion around the mask values, rather than the weights, to estimate the effect of changing the mask values from 1 to 0. More specifically, SNIP computes the
saliency scores according to:
sj = |gj(w,D)|∑m k=1 |gk(w,D)| , (1)
with
gj(w,D) = ∂L(c⊙w,D)
∂cj , (2)
where m is the number of weights in the network, c ∈ {0, 1}m is the pruning mask (initialised to 1 above), D is the training dataset, w are the neural network weights, L is the loss function, and ⊙ is the Hadamard product. These saliency scores are computed before training the network, using one (or more) mini-batches from the training set. The global Top-K weights with the highest saliency scores are retained (cj = 1), and all other weights are pruned (cj = 0), before the network is trained.
Our method, to be introduced in Section 3, also relies on computing the saliency scores for each mask element, but uses a more sophisticated loss function to incorporate the notion of trainability.
3 OUR METHOD: PROSPR
In this section we introduce our method, Prospect Pruning (ProsPr). We note that for the problem of pruning at initialization, the pruning step is immediately followed by training. Therefore, pruning should take into account the trainability of a weight, instead of only its immediate impact on the loss before training. In other words, we want to be able to identify weights that are not only important at initialization, but which may be useful for reducing the loss during training. To this end, we propose to estimate the effect of pruning on the loss over several steps of gradient descent at the beginning of training, rather than the changes in loss at initialization.
More specifically, ProsPr models how training would happen by performing multiple (M) iterations of backpropagation and weight updates—like during normal training. We can then backpropagate through the entire computation graph, from the loss several steps into training, back to the original mask, since the gradient descent procedure is itself differentiable. Once the pruning mask is computed, we rewind the weights back to their values at initialization and train the pruned network. The gradient-of-gradients is called a meta-gradient. This algorithm is illustrated visually in Figure 1.
The higher-order information in the meta-gradient includes interactions between the weights during training. When pruning at initialization, our ultimate goal is to pick a pruned model, A, which is more trainable than an alternative pruned model B. That means we want the loss L(ŷA, y) to be lower than L(ŷB , y) at convergence (for a fixed pruning ratio). Finding the optimal pruning mask is generally infeasible since the training horizon is long (i.e., evaluation is costly) and the space of possible pruning masks is large. Unlike other methods that must compute the saliency scores iteratively, we can use the meta-gradients to compute the pruning mask in one shot. This picks a line in loss-space, which more closely predicts the eventual actual loss. This is because it smooths out over more steps, and takes into account interactions between weights in the training dynamics. Crucially, in the limit of large M, the match to the ultimate objective is exact.
3.1 SALIENCY SCORES VIA META-GRADIENTS
We now introduce ProsPr formally. After initialising the network weights randomly to obtain winit, we apply a weight mask to the initial weights,
w0 = c⊙winit. (3)
This weight mask contains only ones, c = 1, as in SNIP (Lee et al., 2018), and represents the connectivity of the corresponding weights.
We then sample M+1 batches of data Di ∼ Dtrain (i ∈ {0, . . . ,M}; M ≥ 1) for the pruning step, and perform M weight updates1,
w1 = w0 − α∇w0L(w0,D0) (4) ...
wM = wM−1 − α∇wM−1L(wM−1,DM−1). (5) Then, we compute a meta-gradient that backpropagates through these updates. Specifically, we compute the gradient of the final loss w.r.t. the initial mask,
∇c L(wM ,DM ). (6) Using the chain rule, we can write out the form of the meta-gradient beginning from the last step:
∇cL(wM ,D) = ∇wML(wM ,D)(∇cwM ), (7) repeating for each step until we reach the zero’th step whose gradient is trivial,
= ∇wML(wM ,D)(∇wM−1wM ) . . . (∇w0w1)(∇cw0) (8) = ∇wML(wM ,D)(∇wM−1wM ) . . . (∇w0w1)(∇c(c⊙winit)) (9)
= ∇wML(wM ,D)
[ M∏
m=1
(∇wm−1wm) ] winit. (10)
In practice, we can compute the meta-gradients by relying on automatic differentiation software such as PyTorch (Paszke et al., 2019). However, care must be taken to ensure that weights at each step are kept in memory so that the entire computation graph, including gradients, is visible to the automatic differentiation software. The saliency scores are now given by
sj = |gj(w,D)|∑m k=1 |gk(w,D)| , (11)
with
gj(w,D) = ∂L(wM ,D)
∂cj , (12)
where wM is a function of c. Equation (12) stands in contrast to SNIP, where the saliency is computed using the loss at c ·winit rather than wM . The saliency scores are then used to prune the initial weights winit: the ones with the highest saliency scores are retained (cj = 1), and all other weights are pruned (cj = 0). Finally, the network is trained with the pruned weights ŵinit.
Algorithm 1 summarises the proposed method, ProsPr. 1We formalise the weight updates using vanilla SGD here; in practice these may be different when using approaches such as momentum or BatchNorm (Ioffe & Szegedy, 2015). Since our implementation relies on automatic differentiation in PyTorch (Paszke et al., 2019), we can use any type of update, as long as it is differentiable w.r.t. the initial mask c.
Algorithm 1 ProsPr Pseudo-Code 1: Inputs: a training dataset Dtrain, number of initial training steps M , number of main training steps N
(M ≪ N ), learning rate α 2: Initialise: network weights winit
3: cinit = 1 ▷ Initialise mask with ones 4: w0 = cinit ⊙winit ▷ Apply mask to initial weights 5: for k = 0, . . . ,M − 1 do 6: Dk ∼ Dtrain ▷ Sample batch of data 7: wi+1 = wi − α∇wL(wi,Dk) ▷ Update network weights 8: end for
9: gj(w,D) = ∂L(wM ,D)/∂cj ▷ Compute meta-gradient
10: sj = |gj(w,D)|∑m
k=1 |gk(w,D)|
▷ Compute saliency scores
11: Determine the k-th largest element in s, sk.
12: cprune = { 1, if cj ≥ sk 0, otherwise
▷ Set pruning mask
13: ŵ0 = cprune ⊙winit ▷ Apply mask to initial weights winit
14: for i = 1, . . . , N do ▷ Train pruned model 15: ŵi+1 = ŵi − α∇wL(ŵi,D) 16: end for
3.2 FIRST-ORDER APPROXIMATION
Taking the meta-gradient through many model updates (Equation (6)) can be memory intensive: in the forward pass, all gradients of the individual update steps need to be retained in memory to then be able to backpropagate all the way to the initial mask. However, we only need to perform a few steps2 at the beginning of training so in practice we can perform the pruning step on CPU which usually has access to more memory compared to a GPU. We apply this approach in our own experiments, with overheads of around 30 seconds being observed for the pruning step.
Alternatively, when the number of training steps needs to be large we can use the following firstorder approximation. Using Equation (10), the meta-gradient is:
∇cL(wM ,DM ) = ∇wML(wM ,DM )
[ M∏
m=1
(∇wm−1wm) ] winit, (13)
writing wm in terms of wm−1 following SGD,
= ∇wML(wM ,DM )
[ M∏
m=1
∇wm−1(wm−1 − α∇wm−1L(wm−1;Dm)) ] winit,
(14) carrying through the partial derivative,
= ∇wML(wM ,DM )
[ M∏
m=1
I − α∇2wm−1L(wm−1;Dm) ] winit, (15)
and finally dropping small terms for sufficiently small learning rates,
≈ ∇wML(wM ,DM )
[ M∏
m=1
I ] winit, (16)
= ∇wML(wM ,DM ) winit. (17) In the second-to-last step, we drop the higher-order terms, which gives us a first-order approximation of the meta-gradient3.
2We use 3 steps for experiments on CIFAR-10, CIFAR-100 and TinyImageNet datasets 3Note that this approximation also works for optimisers other than vanilla SGD (e.g., Adam, Adamw, Ad-
abound), except that the term which is dropped (r.h.s. of Equation Equation (15)) looks slightly different.
With this approximation, we only need to save the initial weight vector winit in memory and multiply it with the final gradient. This approximation can be crude when the Laplacian terms are large, but with a sufficiently small learning rate it becomes precise. The approximation allows us to take many more intermediate gradient-steps which can be beneficial for performance when the training dataset has many classes, as we will see in Section 4.2.
4 EXPERIMENTS
We empirically evaluate the performance of our method, ProsPr, compared to various vision classification baselines across different architectures and datasets. In supplementary sections we show effectiveness of our method on image segmentation tasks (Appendix D) and when using selfsupervised initialization (Appendix E). We provide details of our hyper-parameters, experiment setup, and implementation details in Appendix A.
4.1 RESULTS ON CIFAR AND TINY-IMAGENET
In recent work, Frankle et al. (2021) extensively study and evaluate different pruning-at-initialization methods under various effects such as weight re-initialization, weight shuffling, and score inversion. They report the best achievable results by these methods and highlight the gap between their performance and two pruning-at-convergence methods, weight rewinding and magnitude pruning (Renda et al., 2020; Frankle et al., 2020).
In Figure 2 we evaluate ProsPr on this benchmark using ResNet-20 and VGG-16 on CIFAR-10, and ResNet-18 on Tiny-ImageNet. It can be seen that ProsPr reduces the performance gap, especially at higher sparsity levels, and in some cases exceeds the accuracy of pruning-after-convergence methods. Full results are also summarised in Appendix B.
This is a remarkable achievement: ProsPr is the first method to close the gap to approaches that prune after training. Previous works that prune at the start of training have not been able to outperform methods that prune after training in any setting, including on smaller datasets such as CIFAR-10 or Tiny-ImageNet. It is also important to note that the other baselines with comparable accuracies are all iterative methods. ProsPr is the only method that achieves this in a single shot, using only 3 steps with a batch size of 512 in the inner loop before computing the meta-gradients. In total, we use only 4 batches of data, and we do not average scores by repeating the method multiple times.
The performance on these small datasets comes from the fact that ProsPr computes higher-order gradients. While there are other iterative methods that can work without any data, their benefit is mostly a more graceful degradation at extreme pruning ratios rather than the best accuracy at more practical sparsity levels. One example is SynFlow, which is similar to FORCE but uses an all-one input tensor instead of samples from the training set (Tanaka et al., 2020).
4.2 RESULTS ON IMAGENET DATASET
To evaluate the performance of ProsPr on more difficult tasks we run experiments on the larger ImageNet dataset. Extending gradient-based pruning methods to this dataset poses several challenges.
Number of classes In synaptic-saliency methods, the mini-batches must contain enough examples from all classes in the dataset. Wang et al. (2020) recommend using class-balanced mini-batches sized at ten times the number of classes. In datasets with few classes this is not an issue, and even a single batch includes multiple examples per class. This is one reason why methods like SNIP work with a single batch, and why we kept the number of steps in ProsPr's inner loop fixed to only 3. ImageNet, however, has 1,000 classes, and a single batch or a handful of small batches is inadequate. Previous methods such as FORCE, GraSP, or SynFlow avoid this problem by repeating the algorithm with new data batches and averaging the saliency scores. In ProsPr we instead increase the number of updates before computing the meta-gradients, ensuring they flow through enough data. Computing meta-gradients through many steps, however, poses new challenges.
Gradient degradation We start to see gradient-stability issues when computing gradients over deep loops. Gradient degradation problems, i.e., vanishing and exploding gradients, have also been observed in other fields that use meta-gradients, such as meta-learning. Many solutions have been proposed to stabilize gradients when the length of the loop increases beyond 4 or 5 steps, although this remains an open area of research (Antoniou et al., 2019).
Computation Complexity For ImageNet we must make the inner loop hundreds of steps deep to achieve balanced data representation. In addition to stability issues, backpropagating through hundreds of steps is very compute intensive.
Therefore, for our experiments on ImageNet we use the first-order approximation of ProsPr (Section 3.2). We evaluate ProsPr using ResNet-50 and VGG-19 architectures and compare against the state-of-the-art methods FORCE and Iter-SNIP introduced by de Jorge et al. (2021). We include multi-batch versions of SNIP and GraSP (SNIP-MB and GraSP-MB) to provide a fair comparison to iterative methods, which partially prune several times during training, in terms of the amount of data presented to the method. We use 1024 steps with a batch size of 256 (i.e., 262,144 samples) for ResNet-50. For VGG-19, a much larger model that requires more GPU memory, we do 256 steps with a batch size of 128. This is still far fewer samples than other methods use. FORCE, for example, gradually prunes in 60 steps, where each step involves computing and averaging scores over 40 batches of size 256, i.e., performing backpropagation 2400 times and showing 614,400 samples to the algorithm.
Table 1 shows our results compared to the baselines reported by de Jorge et al. (2021). First-order ProsPr exceeds previous results in all configurations except one, where it is outperformed by GraSP. Note the surprisingly good performance of random pruning on ResNets, which was also observed by de Jorge et al. (2021). This could be explained by the fact that VGG-19 is a much larger architecture, with 143.6 million parameters compared to 15.5 million in ResNet-50. More specifically, the final three dense layers of VGG-19 constitute 86% of its total prunable parameters, while its convolution layers constitute only 14% of the prunable weights. Pruning methods are therefore able to keep more of the convolution weights and instead prune extensively from the over-parametrized dense layers. ResNet architectures, on the other hand, have a single dense classifier at the end.
4.3 STRUCTURED PRUNING
We also evaluate ProsPr in the structured pruning setup, where entire channels (or columns of linear layers) are removed instead of individual weights. This is a more restricted setup; however, it offers memory savings and reduces the computational cost of training and inference.
Adopting ProsPr for structured pruning is as simple as changing the shape of the pruning mask c in Equation (3) to have one entry per channel (or per column of the weight matrix), as sketched below. We evaluate our method against 3SP, a method that extends SNIP to structured pruning (van Amersfoort et al., 2020). Our results are summarized in Table 2, which shows accuracy improvements in all scenarios. In Appendix C we also evaluate the wall-clock improvements in training time that result from structured pruning at initialization.
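For illustration, a per-channel mask needs only one entry per output channel, with broadcasting recovering the Hadamard form of Equation (3); the shapes below are arbitrary examples.

```python
import torch

w_conv = torch.randn(64, 32, 3, 3)               # (out_ch, in_ch, kH, kW)
c = torch.ones(64, 1, 1, 1, requires_grad=True)  # one mask entry per output channel
w0 = c * w_conv                                  # Eq. 3, now channel-wise via broadcasting
# The meta-gradient w.r.t. c then scores whole channels instead of single weights.
```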
4.4 NUMBER OF META STEPS
Finally, we evaluate ProsPr with a varying number of meta steps, which gives insight into whether using meta-gradients is beneficial. We repeated the experiments from Section 4.3, but this time varied the number of training steps M between 0 and 3. The results in Table 3 show that the final accuracy consistently increases with the depth of training, showing the effectiveness of meta-gradients. We used the same data batch in all M training steps to isolate the effect of M, while in other experiments we use a new batch in every step.
In theory, increasing the number of training steps should always help, matching the ultimate objective (estimating the loss after many epochs of training) in the limit. In practice, however, increasing the number of steps beyond 3 poses gradient-stability issues (and is computationally expensive). These issues have also been identified in the meta-learning literature (Antoniou et al., 2019).
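This ablation can be expressed compactly with the toy setup (loss_fn, x, y, w_init, alpha) from the snippet after Algorithm 1; as in the experiment above, the same batch is reused in all M steps, and M = 0 recovers a SNIP-style one-shot score.

```python
import torch

def prospr_scores(M):
    # Recompute saliency scores for a given unroll depth M.
    masks = [torch.ones_like(wi, requires_grad=True) for wi in w_init]
    w = [c * wi for c, wi in zip(masks, w_init)]
    for _ in range(M):
        grads = torch.autograd.grad(loss_fn(w, x, y), w, create_graph=True)
        w = [wi - alpha * g for wi, g in zip(w, grads)]
    meta = torch.autograd.grad(loss_fn(w, x, y), masks)
    s = torch.cat([g.abs().flatten() for g in meta])
    return s / s.sum()

for M in range(4):
    print(M, prospr_scores(M)[:5])   # scores shift as the unroll depth grows
```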
5 RELATED WORK
Pruning at initialization Several works extend the approach proposed by Lee et al. (2018). de Jorge et al. (2021) evaluate the SNIP objective in a loop in which pruned parameters still receive gradients and therefore have a chance to be un-pruned. The gradual pruning helps avoid the layer-collapse issue, and their method, known as FORCE, achieves better performance at extreme sparsity levels. Tanaka et al. (2020) provide theoretical justification for why pruning iteratively helps with the layer-collapse issue and propose a data-free version of the method in which an all-one input tensor is used instead of real training data. Wang et al. (2020) propose an alternative criterion to minimizing changes in the loss and instead argue for preserving the gradient flow; their method, GraSP, keeps weights that contribute most to the norm of the gradients. van Amersfoort et al. (2020) extend SNIP and GraSP to structured pruning to make training and inference faster, and further augment the scores by their compute cost to push the pruning decision towards greater FLOPS reduction.
Gradual pruning As discussed in Section 1, in existing methods the training step has been absent from the saliency computation. As a workaround, many methods make their approaches training-aware by applying pruning gradually and interleaving it with training: Zhu & Gupta (2018) proposed an exponential schedule for pruning-during-training, and Gale et al. (2019) showed its effectiveness in a broader range of tasks. Frankle & Carbin (2019) show that weight rewinding achieves better results when done in multiple prune-retrain steps. Lym et al. (2019) continuously apply structured pruning via group-lasso regularization while at the same time increasing batch sizes. You et al. (2020) find pruned architectures after a few epochs of training-and-pruning by monitoring a distance metric.
Meta-Gradients Backpropagation through gradients, and its first-order approximation, is also used in the model-agnostic meta-learning literature (Finn et al., 2017; Zintgraf et al., 2019), where the objective is to find a model that can be adapted to new data in a few training steps. Similar to our setup, the meta-loss captures the trainability of a model; additionally, the meta-gradients are used to update the network's weights in a second loop. In the self-supervised learning setting, Xiao et al. (2021) use meta-gradients to explicitly optimize a learn-to-generalize regularization term in nested meta-learning loops. Computing gradients-of-gradients is also used to regularize the loss with a penalty on the gradients, for instance to enforce Lipschitz continuity on the network (Gulrajani et al., 2017) or to control different norms of the gradients (Alizadeh et al., 2020).
6 DISCUSSION
Although pruning at initialization has the potential to greatly reduce the cost of training neural networks, existing methods have not lived up to their promise. We argue that this is, in part, because they do not account for the fact that the pruned network is going to be trained after it is pruned. We take this into account, using a saliency score that captures the effect of a pruning mask on the training procedure. As a result, our method is competitive not just with methods that prune before training, but also with methods that prune iteratively during training and those that prune after training. In principle, compressing neural networks at initialization has the potential to reduce energy and environmental costs of machine learning. Beyond our context, taking into account that methods which prune-at-convergence generally have to be fine-tuned, it is possible that our work could have further implications for these pruning methods as well (Molchanov et al., 2016; Wang et al., 2019).
ACKNOWLEDGMENTS
Milad Alizadeh is grateful for funding by the EPSRC (grant references EP/R512333/1) and Arm (via NPIF 2017 studentship). Shyam Tailor is supported by EPSRC grants EP/M50659X/1 and EP/S001530/1 (the MOA project) and the European Research Council via the REDIAL project (Grant Agreement ID: 805194). Luisa Zintgraf is supported by the 2017 Microsoft Research PhD Scholarship Program, and the 2020 Microsoft Research EMEA PhD Award. Joost van Amersfoort is grateful for funding by the EPSRC (grant reference EP/N509711/1) and Google-DeepMind. Sebastian Farquhar is supported by the EPSRC via the Centre for Doctoral Training in Cybersecurity at the University of Oxford as well as Christ Church, University of Oxford.
A EXPERIMENTAL SETUP
A.1 ARCHITECTURE DETAILS
We use the standard VGG and ResNet models provided by torchvision throughout this work where possible. The ResNet-20 model, which is not commonly evaluated, was implemented to match the version used by Frankle et al. (2021) so that we could compare using the benchmark supplied by that work.
For smaller datasets, it is common to patch models defined for ImageNet. Specifically, for ResNets, we replace the first convolution with a 3 × 3 convolution with stride 1, and the first max-pooling layer is replaced with an identity operation; a sketch of this patch follows. For VGG, we follow the convention used by works such as FORCE (de Jorge et al., 2021): we do not change any convolutional layers, but we replace the classifier with a single global average-pooling layer followed by a single fully-connected layer.
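As a sketch, the ResNet patch described above could look as follows with torchvision; padding=1 and bias=False are assumed from the standard ResNet stem convention rather than stated here.

```python
import torch.nn as nn
from torchvision.models import resnet18

net = resnet18(num_classes=10)  # e.g. for CIFAR-10
# Small-input stem: 3x3 convolution with stride 1, and the max-pool removed.
net.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
net.maxpool = nn.Identity()
```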
A.2 TRAINING DETAILS
For CIFAR-10, CIFAR-100 and TinyImageNet we perform 3 meta-steps to calculate our saliency criteria. We train the resulting models for 200 epochs with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 100 and 150. Weight decay was set to 5 × 10−4. The batch size for CIFAR-10, CIFAR-100, and TinyImageNet was 256. For CIFAR-10 and CIFAR-100 we augment the training data with random cropping (32 × 32, padding 4) and horizontal flipping. For TinyImageNet we use the same procedure, with the random cropping parameters set to 64 × 64, padding 4. For ImageNet we train models for 100 epochs with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 30, 60 and 90. Weight decay was set to 1 × 10−4 and the batch size was 256. We use the first-order approximation to do the pruning, with 1024 steps for ResNet-50. For VGG-19 we use 2048 steps, but with the batch size set to 128 (due to memory limitations, as our implementation only utilized a single GPU for meta-training). We apply random resizing, then crop the image to 224 × 224, with horizontal flipping.
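For concreteness, the CIFAR schedule above corresponds to the following PyTorch setup; the choice of SGD and the stand-in model are assumptions, while the learning rate, milestones, epoch count and weight decay are taken from the text.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the pruned network
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[100, 150], gamma=0.1)
for epoch in range(200):
    ...  # one training epoch over the pruned model would go here
    sched.step()
```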
A.3 IMPLEMENTATIONS
In addition to our code, the reader may find it useful to reference the following repos from related work. Our experiments were performed using code derived from these implementations:
B NUMBERS FROM FIGURE 2

Table 4: Numerical results for ResNet-20 on CIFAR-10

Sparsity (%): 20.0, 36.0, 48.8, 59.0, 67.2, 73.8, 79.0, 83.2, 86.6, 89.3, 91.4, 93.1, 94.5, 95.6, 96.5, 97.2, 97.7, 98.2
LTR after Training: 91.8 ± 0.2, 91.9 ± 0.2, 91.9 ± 0.2, 91.7 ± 0.2, 91.5 ± 0.1, 91.4 ± 0.1, 91.1 ± 0.1, 90.6 ± 0.1, 90.1 ± 0.0, 89.2 ± 0.1, 88.0 ± 0.2, 86.8 ± 0.2, 85.7 ± 0.1, 84.4 ± 0.2, 82.8 ± 0.1, 81.2 ± 0.3, 79.4 ± 0.3, 77.3 ± 0.5
Magnitude after Training: 92.2 ± 0.3, 92.0 ± 0.2, 92.0 ± 0.2, 91.7 ± 0.1, 91.5 ± 0.2, 91.3 ± 0.2, 91.1 ± 0.2, 90.7 ± 0.2, 90.2 ± 0.2, 89.4 ± 0.2, 88.7 ± 0.2, 87.7 ± 0.2, 86.5 ± 0.2, 85.2 ± 0.2, 83.5 ± 0.3, 81.9 ± 0.3, 80.4 ± 0.2, 77.7 ± 0.4
Magnitude at Initialization: 91.5 ± 0.2, 91.2 ± 0.1, 90.8 ± 0.1, 90.7 ± 0.2, 90.2 ± 0.1, 89.8 ± 0.2, 89.3 ± 0.2, 88.6 ± 0.2, 87.9 ± 0.3, 87.0 ± 0.3, 86.1 ± 0.2, 85.2 ± 0.4, 83.9 ± 0.2, 82.5 ± 0.4, 80.7 ± 0.5, 79.1 ± 0.4, 77.2 ± 0.4, 74.5 ± 0.7
SNIP: 91.8 ± 0.2, 91.2 ± 0.3, 90.9 ± 0.1, 90.7 ± 0.1, 90.1 ± 0.2, 89.7 ± 0.3, 89.0 ± 0.2, 88.5 ± 0.3, 87.7 ± 0.2, 87.2 ± 0.4, 85.8 ± 0.1, 84.7 ± 0.5, 83.8 ± 0.3, 82.5 ± 0.4, 80.9 ± 0.2, 79.1 ± 0.2, 77.3 ± 0.2, 74.0 ± 0.5
GraSP: 91.5 ± 0.1, 91.3 ± 0.2, 91.2 ± 0.1, 90.6 ± 0.2, 90.3 ± 0.2, 89.6 ± 0.1, 89.1 ± 0.2, 88.4 ± 0.2, 87.9 ± 0.1, 87.0 ± 0.2, 85.9 ± 0.1, 85.1 ± 0.4, 83.9 ± 0.4, 82.8 ± 0.2, 81.2 ± 0.2, 79.7 ± 0.3, 78.0 ± 0.3, 76.0 ± 0.5
SynFlow: 91.7 ± 0.1, 91.3 ± 0.2, 91.2 ± 0.1, 90.8 ± 0.1, 90.4 ± 0.2, 89.8 ± 0.1, 89.5 ± 0.3, 88.9 ± 0.4, 88.1 ± 0.1, 87.4 ± 0.5, 86.1 ± 0.2, 85.4 ± 0.2, 84.3 ± 0.2, 82.9 ± 0.2, 81.7 ± 0.2, 80.0 ± 0.3, 78.6 ± 0.4, 76.4 ± 0.4
Random: 91.6 ± 0.2, 91.2 ± 0.2, 90.8 ± 0.3, 90.5 ± 0.2, 89.8 ± 0.2, 89.0 ± 0.4, 88.4 ± 0.2, 87.5 ± 0.3, 86.6 ± 0.2, 85.6 ± 0.3, 84.3 ± 0.4, 83.1 ± 0.4, 81.6 ± 0.3, 79.6 ± 0.4, 74.2 ± 6.4, 64.7 ± 9.7, 56.9 ± 8.5, 43.7 ± 12.5
ProsPr: 92.3 ± 0.1, 92.1 ± 0.0, 91.7 ± 0.2, 91.5 ± 0.1, 91.0 ± 0.2, 90.5 ± 0.0, 90.1 ± 0.1, 89.6 ± 0.2, 88.5 ± 0.5, 87.8 ± 0.1, 86.9 ± 0.3, 85.5 ± 0.6, 84.3 ± 0.2, 83.0 ± 0.9, 80.8 ± 0.5, 79.6 ± 0.7, 77.0 ± 0.8, 74.2 ± 0.3
Table 5: Numerical results for VGG-16 on CIFAR-10
Sparsity (%): 20.0, 36.0, 48.8, 59.0, 67.2, 73.8, 79.0, 83.2, 86.6, 89.3, 91.4, 93.1, 94.5, 95.6, 96.5, 97.2, 97.7, 98.2
LTR after Training: 93.5 ± 0.1, 93.6 ± 0.1, 93.6 ± 0.1, 93.6 ± 0.1, 93.8 ± 0.1, 93.6 ± 0.1, 93.6 ± 0.1, 93.8 ± 0.1, 93.8 ± 0.1, 93.7 ± 0.1, 93.7 ± 0.1, 93.8 ± 0.1, 93.5 ± 0.2, 93.4 ± 0.1, 93.2 ± 0.1, 93.0 ± 0.2, 92.7 ± 0.1, 92.1 ± 0.4
Magnitude after Training: 93.9 ± 0.2, 93.9 ± 0.2, 93.8 ± 0.1, 93.8 ± 0.1, 93.9 ± 0.1, 94.0 ± 0.2, 93.8 ± 0.1, 93.8 ± 0.1, 93.9 ± 0.2, 93.9 ± 0.2, 93.8 ± 0.2, 93.7 ± 0.2, 93.5 ± 0.1, 93.5 ± 0.1, 93.3 ± 0.2, 93.0 ± 0.1, 92.9 ± 0.1, 91.7 ± 0.8
Magnitude at Initialization: 93.6 ± 0.2, 93.4 ± 0.2, 93.3 ± 0.1, 93.2 ± 0.2, 93.3 ± 0.3, 93.0 ± 0.1, 93.1 ± 0.1, 92.9 ± 0.1, 92.9 ± 0.2, 92.7 ± 0.1, 92.5 ± 0.2, 92.3 ± 0.1, 92.2 ± 0.2, 92.0 ± 0.1, 91.8 ± 0.2, 91.5 ± 0.1, 91.3 ± 0.3, 90.9 ± 0.2
SNIP: 93.6 ± 0.1, 93.4 ± 0.1, 93.3 ± 0.1, 93.4 ± 0.2, 93.3 ± 0.2, 93.4 ± 0.1, 93.1 ± 0.1, 93.1 ± 0.1, 93.2 ± 0.1, 93.1 ± 0.1, 92.9 ± 0.1, 92.8 ± 0.2, 92.8 ± 0.1, 92.3 ± 0.2, 92.2 ± 0.1, 92.1 ± 0.1, 91.7 ± 0.1, 91.5 ± 0.1
GraSP: 93.5 ± 0.1, 93.4 ± 0.2, 93.5 ± 0.0, 93.3 ± 0.1, 93.2 ± 0.2, 93.3 ± 0.2, 93.2 ± 0.1, 93.0 ± 0.3, 93.0 ± 0.1, 92.7 ± 0.2, 92.8 ± 0.1, 92.4 ± 0.1, 92.3 ± 0.1, 92.2 ± 0.1, 91.9 ± 0.1, 91.6 ± 0.2, 91.5 ± 0.0, 91.2 ± 0.2
SynFlow: 93.6 ± 0.2, 93.6 ± 0.1, 93.5 ± 0.1, 93.4 ± 0.1, 93.4 ± 0.2, 93.5 ± 0.2, 93.2 ± 0.1, 93.2 ± 0.1, 93.1 ± 0.1, 92.9 ± 0.1, 92.7 ± 0.2, 92.5 ± 0.1, 92.3 ± 0.1, 92.0 ± 0.1, 91.8 ± 0.3, 91.3 ± 0.1, 91.0 ± 0.2, 90.6 ± 0.2
Random: 93.6 ± 0.3, 93.2 ± 0.1, 93.2 ± 0.2, 93.0 ± 0.2, 92.7 ± 0.2, 92.4 ± 0.2, 92.2 ± 0.1, 91.7 ± 0.1, 91.2 ± 0.1, 90.8 ± 0.2, 90.3 ± 0.2, 89.6 ± 0.2, 88.8 ± 0.2, 88.3 ± 0.4, 87.6 ± 0.1, 86.4 ± 0.2, 86.0 ± 0.4, 84.5 ± 0.4
ProsPr: 93.7 ± 0.2, 93.7 ± 0.1, 93.9 ± 0.1, 93.8 ± 0.1, 93.8 ± 0.1, 93.5 ± 0.2, 93.6 ± 0.1, 93.4 ± 0.3, 93.5 ± 0.2, 93.3 ± 0.1, 93.0 ± 0.1, 93.0 ± 0.1, 92.8 ± 0.3, 92.7 ± 0.1, 92.6 ± 0.1, 92.2 ± 0.1, 92.1 ± 0.2, 91.6 ± 0.4
Table 6: Numerical results for ResNet-18 on TinyImageNet
Sparsity (%): 20.0, 36.0, 48.8, 59.0, 67.2, 73.8, 79.0, 83.2, 86.6, 89.3, 91.4, 93.1, 94.5, 95.6, 96.5, 97.2, 97.7, 98.2
LTR after Training: 51.7 ± 0.2, 51.4 ± 0.3, 51.5 ± 0.4, 52.1 ± 0.4, 51.8 ± 0.4, 52.0 ± 0.1, 52.0 ± 0.1, 52.0 ± 0.2, 52.1 ± 0.3, 52.0 ± 0.2, 52.4 ± 0.2, 51.8 ± 0.4, 51.8 ± 0.6, 51.4 ± 0.4, 50.9 ± 0.2, 49.3 ± 0.7, 48.3 ± 0.7, 46.0 ± 0.3
Magnitude after Training: 51.7 ± 0.3, 51.4 ± 0.1, 51.7 ± 0.2, 51.5 ± 0.3, 51.7 ± 0.4, 51.4 ± 0.5, 51.1 ± 0.3, 51.4 ± 0.4, 51.3 ± 0.4, 51.1 ± 0.6, 51.7 ± 0.3, 51.3 ± 0.3, 51.8 ± 0.4, 51.2 ± 0.3, 51.1 ± 0.2, 50.4 ± 0.2, 49.0 ± 0.2, 47.8 ± 0.5
Magnitude at Initialization: 51.0 ± 0.3, 51.2 ± 0.3, 51.0 ± 0.2, 50.5 ± 0.5, 50.6 ± 0.3, 50.0 ± 0.3, 50.3 ± 0.2, 50.3 ± 0.3, 50.0 ± 0.1, 49.8 ± 0.5, 49.0 ± 0.1, 48.3 ± 0.3, 47.2 ± 0.2, 46.2 ± 0.2, 44.4 ± 0.5, 42.2 ± 0.1, 40.8 ± 0.4, 38.1 ± 0.6
SNIP: 51.4 ± 0.2, 51.5 ± 0.3, 51.4 ± 0.3, 51.3 ± 0.5, 51.6 ± 0.4, 51.4 ± 0.5, 51.9 ± 0.6, 51.5 ± 0.3, 51.0 ± 0.2, 51.2 ± 0.7, 50.6 ± 0.3, 50.1 ± 0.3, 49.2 ± 0.3, 47.8 ± 0.2, 46.7 ± 0.1, 45.2 ± 0.4, 44.5 ± 0.3, 42.3 ± 0.3
GraSP: 49.8 ± 0.4, 49.1 ± 0.3, 49.5 ± 0.2, 49.5 ± 0.4, 49.2 ± 0.1, 49.5 ± 0.2, 48.7 ± 0.1, 49.0 ± 0.5, 48.8 ± 0.4, 48.3 ± 0.1, 48.2 ± 0.1, 47.7 ± 0.2, 46.5 ± 0.1, 45.5 ± 0.7, 44.9 ± 0.2, 44.1 ± 1.0, 42.9 ± 0.5, 41.0 ± 0.1
SynFlow: 51.8 ± 0.3, 51.6 ± 0.3, 51.7 ± 0.7, 51.8 ± 0.2, 51.3 ± 0.4, 51.3 ± 0.4, 51.5 ± 0.2, 51.0 ± 0.4, 50.2 ± 0.4, 50.4 ± 0.3, 49.1 ± 0.0, 48.0 ± 0.5, 46.7 ± 0.7, 45.6 ± 0.0, 44.0 ± 0.2, 42.2 ± 0.3, 40.0 ± 0.1, 38.2 ± 0.5
Random: 50.6 ± 0.5, 50.1 ± 0.2, 49.9 ± 0.3, 48.7 ± 0.2, 48.0 ± 0.4, 48.0 ± 0.6, 46.4 ± 0.1, 45.9 ± 0.5, 44.7 ± 0.2, 43.6 ± 0.3, 42.7 ± 0.2, 41.4 ± 0.4, 40.2 ± 0.2, 37.2 ± 0.2, 36.2 ± 0.7, 34.0 ± 0.4, 32.2 ± 0.5, 30.0 ± 0.3
ProsPr: 51.8 ± 0.4, 51.4 ± 0.7, 51.2 ± 0.9, 52.0 ± 0.2, 51.8 ± 0.1, 51.2 ± 0.4, 52.0 ± 0.3, 51.6 ± 0.7, 51.1 ± 0.4, 50.7 ± 0.6, 50.9 ± 0.3, 50.8 ± 1.2, 51.1 ± 0.7, 50.8 ± 0.5, 50.8 ± 0.3, 49.6 ± 0.6, 49.2 ± 0.2, 46.9 ± 0.7
C WALL CLOCK TIME FOR STRUCTURE PRUNING AT INITIALIZATION
When pruning is done at convergence, the benefits of having a compressed model (in terms of memory savings and speed-up) can only be realized at inference/deployment time. With pruning at initialization, however, these benefits can be reaped during training as well. This is especially true for structured pruning, where pruning yields weights and convolutional kernels with smaller dimensions (as opposed to unstructured pruning, where we end up with sparse weights of the original dimensions). This means that, in addition to memory savings, training takes fewer operations, which speeds it up. To evaluate the speed benefits of pruning at initialization, we measured the wall-clock training time on an NVIDIA RTX 2080 Ti GPU for the architectures used in Section 4.3 (and additionally on the ImageNet dataset). The results in Table 7 show that structured pruning with ProsPr can significantly reduce the overall training time.
D RESULTS ON SEGMENTATION TASK
An interesting, albeit less common, application for pruning models is segmentation. In a recent paper, Jeong et al. (2021) train and prune the U-Net (Ronneberger et al., 2015) architecture on two image datasets from the Cell Tracking Challenge (PhC-C2DH-U373 and DIC-C2DH-HeLa). They use the classic multi-step approach of gradually applying magnitude pruning interleaved with fine-tuning stages. To evaluate the flexibility of our method, we used meta-gradients at the beginning of training (on a randomly initialized U-Net), pruned in a single shot, and trained the network once for the same number of epochs (50). We kept the training set-up the same as the baseline by Jeong et al. (2021) (i.e., resizing images and segmentation maps to 256 × 256 and setting aside 30% of the training data for validation) and similarly aimed to find the highest pruning ratio that does not degrade IOU. We report the intersection-over-union (IOU) metric for the two datasets in Tables 8 and 9:
Table 8: Mean-IOU on U373 validation

Method         Prune Ratio   Mean IOU
Unpruned       -             0.9371
Jeong et al.   95%           0.9368
ProsPr         97%           0.9369

Table 9: Mean-IOU on HeLa validation

Method         Prune Ratio   Mean IOU
Unpruned       -             0.7514
Jeong et al.   81.8%         0.7411
ProsPr         90%           0.7491
These results show that our method performs as well as (or better than) this compute-intensive baseline, in the sense that we can prune more parameters while keeping the IOU score the same.
E SELF-SUPERVISED INITIALIZATION
To evaluate the robustness and consistency of our method against non-random initialization, we ran experiments using BYOL to learn representations from unlabeled samples (Grill et al., 2020). We used ResNet-18 as a backbone and trained for 1000 epochs with an embedding size of 64. Unlike the vanilla ResNet-18 architecture used in Section 4.3, we used the commonly-used modified version of ResNet-18 for smaller inputs (removing the first pooling layer and modifying the first convolutional layer to have a kernel size of 3, stride of 1, and padding of 1). We then used this trained ResNet-18 as the initialization for our meta-gradient pruning method. After the pruning step, all layers were trained as before until convergence. All training hyper-parameters were kept as before. The results (final test accuracies at 95% pruning) are summarized in Table 10.
These results show the robustness of our method for this particular self-supervised initialization. Starting from a learned representation can be challenging because such representations are much closer to the weight values at convergence, and therefore the magnitude of their gradients is significantly smaller than for randomly initialized weights. However, this is less of a problem for meta-gradients, whose magnitude remains significant due to back-propagation through the training steps. This can be seen in Figure 3, which shows the per-layer L2 norm of meta-gradients compared to normal gradients for a BYOL-initialized ResNet-18; the meta-gradients provide a noticeably stronger signal.
Summary Of The Paper
This work focuses on weight pruning at initialization. In this paper, the authors point out an important problem that the pruned subnetwork at initialization is going to be trained and previous prune-at-init methods ignore this fact. As a result, these prune-at-init methods ignore the trainability of weights. This paper proposes to use meta-gradients through the first few steps of optimization to determine which weights to prune. Experimental results show that ProsPr (this paper) achieves state-of-the-art pruning performance.
Review
[Strengths]
The proposed ProsPr is a simple and effective pruning method that prunes weights at initialization. The authors propose to use meta-gradients to compute saliency scores when pruning weights.
The higher-order terms in meta-gradients can be further dropped (Eq. 16) such that saving the initial weights winit is enough when computing meta-gradients.
Experimental results show that ProsPr achieves state-of-the-art pruning performance.
[Weaknesses]
Writing needs improvement. E.g., "Many methods attempts...", "...the original, unpruned, model" and "Previous works that prune at the start have training have not been...".
Although experimental results show that using estimation over several steps of gradient descent improves pruning performance, the connection between meta-gradients and motivation (i.e., the trainability of weights) is not strong and convincing enough. |
Algorithm 1 summarises the proposed method, ProsPr. 1We formalise the weight updates using vanilla SGD here; in practice these may be different when using approaches such as momentum or BatchNorm (Ioffe & Szegedy, 2015). Since our implementation relies on automatic differentiation in PyTorch (Paszke et al., 2019), we can use any type of update, as long as it is differentiable w.r.t. the initial mask c.
Algorithm 1 ProsPr Pseudo-Code 1: Inputs: a training dataset Dtrain, number of initial training steps M , number of main training steps N
(M ≪ N ), learning rate α 2: Initialise: network weights winit
3: cinit = 1 ▷ Initialise mask with ones 4: w0 = cinit ⊙winit ▷ Apply mask to initial weights 5: for k = 0, . . . ,M − 1 do 6: Dk ∼ Dtrain ▷ Sample batch of data 7: wi+1 = wi − α∇wL(wi,Dk) ▷ Update network weights 8: end for
9: gj(w,D) = ∂L(wM ,D)/∂cj ▷ Compute meta-gradient
10: sj = |gj(w,D)|∑m
k=1 |gk(w,D)|
▷ Compute saliency scores
11: Determine the k-th largest element in s, sk.
12: cprune = { 1, if cj ≥ sk 0, otherwise
▷ Set pruning mask
13: ŵ0 = cprune ⊙winit ▷ Apply mask to initial weights winit
14: for i = 1, . . . , N do ▷ Train pruned model 15: ŵi+1 = ŵi − α∇wL(ŵi,D) 16: end for
3.2 FIRST-ORDER APPROXIMATION
Taking the meta-gradient through many model updates (Equation (6)) can be memory intensive: in the forward pass, all gradients of the individual update steps need to be retained in memory to then be able to backpropagate all the way to the initial mask. However, we only need to perform a few steps2 at the beginning of training so in practice we can perform the pruning step on CPU which usually has access to more memory compared to a GPU. We apply this approach in our own experiments, with overheads of around 30 seconds being observed for the pruning step.
Alternatively, when the number of training steps needs to be large we can use the following firstorder approximation. Using Equation (10), the meta-gradient is:
∇cL(wM ,DM ) = ∇wML(wM ,DM )
[ M∏
m=1
(∇wm−1wm) ] winit, (13)
writing wm in terms of wm−1 following SGD,
= ∇wML(wM ,DM )
[ M∏
m=1
∇wm−1(wm−1 − α∇wm−1L(wm−1;Dm)) ] winit,
(14) carrying through the partial derivative,
= ∇wML(wM ,DM )
[ M∏
m=1
I − α∇2wm−1L(wm−1;Dm) ] winit, (15)
and finally dropping small terms for sufficiently small learning rates,
≈ ∇wML(wM ,DM )
[ M∏
m=1
I ] winit, (16)
= ∇wML(wM ,DM ) winit. (17) In the second-to-last step, we drop the higher-order terms, which gives us a first-order approximation of the meta-gradient3.
2We use 3 steps for experiments on CIFAR-10, CIFAR-100 and TinyImageNet datasets 3Note that this approximation also works for optimisers other than vanilla SGD (e.g., Adam, Adamw, Ad-
abound), except that the term which is dropped (r.h.s. of Equation Equation (15)) looks slightly different.
With this approximation, we only need to save the initial weight vector winit in memory and multiply it with the final gradient. This approximation can be crude when the Laplacian terms are large, but with a sufficiently small learning rate it becomes precise. The approximation allows us to take many more intermediate gradient-steps which can be beneficial for performance when the training dataset has many classes, as we will see in Section 4.2.
4 EXPERIMENTS
We empirically evaluate the performance of our method, ProsPr, compared to various vision classification baselines across different architectures and datasets. In supplementary sections we show effectiveness of our method on image segmentation tasks (Appendix D) and when using selfsupervised initialization (Appendix E). We provide details of our hyper-parameters, experiment setup, and implementation details in Appendix A.
4.1 RESULTS ON CIFAR AND TINY-IMAGENET
In recent work, Frankle et al. (2021) extensively study and evaluate different pruning-at-initialization methods under various effects such as weight re-initialization, weight shuffling, and score inversion. They report the best achievable results by these methods and highlight the gap between their performance and two pruning-at-convergence methods, weight rewinding and magnitude pruning (Renda et al., 2020; Frankle et al., 2020).
In Figure 2 we evaluate ProsPr on this benchmark using ResNet-20 and VGG-16 on CIFAR-10, and ResNet-18 on Tiny-ImageNet. It can be seen that ProsPr reduces the performance gap, especially at higher sparsity levels, and in some cases exceeds the accuracy of pruning-after-convergence methods. Full results are also summarised in Appendix B.
This is a remarkable achievement: ProsPr is the first work to close the gap to methods that prune after training. Previous works that prune at the start have not been able to outperform methods that prune after training on any settings, including smaller datasets such as CIFAR-10 or Tiny-ImageNet. It is also important to note that other baselines that have comparable accuracies are all iterative methods. ProsPr is the only method that can do this in a single shot after using only 3 steps batch-sizes of 512
in the inner-loop before computing the meta-gradients. In total, we only use 4 batches of data. We also do not do any averaging of scores by repeating the method multiple times.
The performance in these small datasets comes from the fact that ProsPr computes higher-order gradients. While there are other iterative methods that can work without any data, their effect is mostly a more graceful degradation at extreme pruning ratios as opposed to best accuracy at more practical sparsity levels. One example is SynFlow which is similar to FORCE but uses an all-one input tensor instead of samples from the training set (Tanaka et al., 2020).
4.2 RESULTS ON IMAGENET DATASET
To evaluate the performance of ProsPr on more difficult tasks we run experiments on the larger ImageNet dataset. Extending gradient-based pruning methods to this dataset poses several challenges.
Number of classes In synaptic-saliency methods, the mini batches must have enough examples from all classes in the dataset. Wang et al. (2020) recommend using class-balanced mini-batches sized ten times the number of classes. In datasets with few classes this is not an issue and even a single batch includes multiple examples per class. This is one reason why methods like SNIP work with a single batch, and why we kept the number of steps in ProsPr’s inner loop fixed to only 3. ImageNet however has 1,000 classes, and using a single or a handful of small batches is inadequate. Previous methods such as FORCE, GraSP, or SynFlow avoid this problem by repeating the algorithm with new data batches and averaging the saliency scores. In ProsPr we instead increase the number of updates before computing the meta-gradients, ensuring they flow through enough data. Computing meta-gradients through many steps however poses new challenges.
Gradient degradation We start to see gradient stability issues when computing gradients over deep loops. Gradient degradation problems, i.e., vanishing and exploding gradients, have also been observed in other fields that use meta-gradients such as Meta-Learning. Many solutions have been proposed to stabilize gradients when the length of loop increases beyond 4 or 5 steps, although this remains an open area of research (Antoniou et al., 2019).
Computation Complexity For ImageNet we must make the inner loop hundreds of steps deep to achieve balanced data representation. In addition to stability issues, backpropagating through hundreds of steps is very compute intensive.
Therefore for our experiments on ImageNet we use the first-order approximation of ProsPr (Sec 3.2). We evaluate ProsPr using ResNet-50 and VGG-19 architectures and compare against state-of-the-art methods FORCE and Iter-SNIP introduced by de Jorge et al. (2021). We include multi-batch versions of SNIP and GraSP (SNIP-MB and GraSP-MB) to provide a fair comparison to iterative methods, which partially prune several times during training, in terms of the amount of data presented to the method. We use 1024 steps with a batch size of 256 (i.e. 262,144 samples) for ResNet-50. For VGG-19, a much larger model, and which requires more GPU memory we do 256 steps with batch size of 128. This is still far fewer samples than other methods. Force, for example, gradually prunes
in 60 steps, where each step involves computing and averaging scores over 40 batches of size 256, i.e. performing backpropagation 2400 times and showing 614,400 samples to the algorithm.
Table 1 shows our results compared to the baselines reported by de Jorge et al. (2021). First-order ProsPr exceeds previous results in all configurations except one, where it is outperformed by GraSP. Note the surprisingly good performance of random pruning of ResNets, which was also observed by de Jorge et al. (2021). This could be explained by the fact that VGG-19 is a much larger architecture with 143.6 million parameters, compared to 15.5 million in ResNet-50s. More specifically the final three dense layers of VGG-19 constitute 86% of its total prunable parameters. The convolution layers of VGG constitute only 14% of the prunable weights. Pruning methods are therefore able to keep more of the convolution weights and instead prune extensively from the over-parametrized dense layers. ResNet architectures on the hand have a single dense classifier at the end.
4.3 STRUCTURED PRUNING
We also evaluate ProsPr in the structured pruning setup where instead of pruning individual weights, entire channels (or columns of linear layers) are removed. This is a more restricted setup, however it offers memory savings and reduces the computational cost of training and inference.
Adopting ProsPr for structured pruning is as simple as changing the shape of the pruning mask c in Eq 3 to have one entry per channel (or column of the weight matrix). We evaluate our method against 3SP, a method that extends SNIP to structured pruning (van Amersfoort et al., 2020). Our results are summarized in Table 2 which show accuracy improvements in all scenarios. In Appendix C we also evaluate wall-clock improvements in training time as a result of structured pruning at initialization.
4.4 NUMBER OF META STEPS
Finally, we evaluate ProsPr when using a varying number of meta steps, which gives insight into whether using meta-gradients is beneficial. We repeated experiments from Section 4.3 but this time we vary the depth of training steps between 0 and 3. The results in Table 3 show that the final accuracy consistently increases as we increase the depth of the training, showing the effectiveness of meta-gradients. We used the same data batch in all M training steps to isolate the effect of M, while in other experiments we use a new batch in every step.
In theory increasing the number of training steps should always help and match the ultimate objective (estimating the loss after many epochs of training) in the limit. However, in practice increasing the number of steps beyond 3 poses a lot of gradient stability issues (and is computationally expensive). These issues have been also identified in the meta-learning literature (Antoniou et al., 2019).
5 RELATED WORK
Pruning at initialization Several works extend the approach proposed by Lee et al. (2018). de Jorge et al. (2021) evaluate SNIP objective in a loop in which pruned parameters still receive gradients and therefore have a chance to get un-pruned. The gradual pruning helps avoid the layercollapse issue, and their method, known as FORCE, achieves better performance at extreme sparsity levels. Tanaka et al. (2020) provide theoretical justification for why iteratively pruning helps with the layer-collapse issue and propose a data-free version of the method where an all-one input tensor is used instead of real training data. Wang et al. (2020) propose an alternative criterion to minimizing changes in the loss and instead argue for preserving the gradient flow. Their method, GraSP, keeps weights that contribute most to the norm of the gradients. van Amersfoort et al. (2020) extends SNIP and GraSP to structured pruning to make training and inference faster. They further augment the scores by their compute cost to push the pruning decision towards more FLOPS reduction.
Gradual pruning As discussed in Section 1, in existing methods the training step has been absent from the saliency computation step. As a workaround, many methods make their approaches training-aware by applying pruning gradually and interleaving it with training: Zhu & Gupta (2018) proposed an exponential schedule for pruning-during-training and Gale et al. (2019) showed its effectiveness in a broader range of tasks. Frankle & Carbin (2019) show that weight rewinding achieves better results when done in multiple prune-retrain steps. Lym et al. (2019) continuously apply structured pruning via group-lasso regularization while at the same time increasing batch sizes. You et al. (2020) find pruned architectures after a few epochs of training-and-pruning and monitoring a distance metric.
Meta-Gradients Backpropagation through gradients, and its first-order approximation, is also used in model-agnostic meta-learning literature (Finn et al., 2017; Zintgraf et al., 2019) where the objective is to find a model that can be adapted to new data in a few training steps. Similar to our setup, the meta-loss captures the trainability of a model, but additionally, the meta-gradients are used to update the network’s weights in a second loop. In self-supervised learning setting, Xiao et al. (2021) use meta-gradients to explicitly optimize a learn-to-generalize regularization term in nested meta-learning loops. Computing gradients-of-gradients is also used to regularize loss with a penalty on the gradients, for instance, to enforce Lipschitz continuity on the network (Gulrajani et al., 2017) or to control different norms of the gradients (Alizadeh et al., 2020).
6 DISCUSSION
Although pruning at initialization has the potential to greatly reduce the cost of training neural networks, existing methods have not lived up to their promise. We argue that this is, in part, because they do not account for the fact that the pruned network is going to be trained after it is pruned. We take this into account, using a saliency score that captures the effect of a pruning mask on the training procedure. As a result, our method is competitive not just with methods that prune before training, but also with methods that prune iteratively during training and those that prune after training. In principle, compressing neural networks at initialization has the potential to reduce energy and environmental costs of machine learning. Beyond our context, taking into account that methods which prune-at-convergence generally have to be fine-tuned, it is possible that our work could have further implications for these pruning methods as well (Molchanov et al., 2016; Wang et al., 2019).
ACKNOWLEDGMENTS
Milad Alizadeh is grateful for funding by the EPSRC (grant references EP/R512333/1) and Arm (via NPIF 2017 studentship). Shyam Tailor is supported by EPSRC grants EP/M50659X/1 and EP/S001530/1 (the MOA project) and the European Research Council via the REDIAL project (Grant Agreement ID: 805194). Luisa Zintgraf is supported by the 2017 Microsoft Research PhD Scholarship Program, and the 2020 Microsoft Research EMEA PhD Award. Joost van Amersfoort is grateful for funding by the EPSRC (grant reference EP/N509711/1) and Google-DeepMind. Sebastian Farquhar is supported by the EPSRC via the Centre for Doctoral Training in Cybersecurity at the University of Oxford as well as Christ Church, University of Oxford.
A EXPERIMENTAL SETUP
A.1 ARCHITECTURE DETAILS
We use standard VGG and ResNet models provided by torchvision throughout this work where possible. The ResNet-20 model, which is not commonly evaluated, was implemented to match the version used by Frankle et al. (2021) so that we could compare using the benchmark supplied by this paper.
For smaller datasets, it is common to patch models defined for ImageNet. Specifically, for ResNets, we replace the first convolution with one 3 × 3 filter size, and stride 1; the first max-pooling layer is replaced with an identity operation. For VGG, we follow the convention used by works such as FORCE (de Jorge et al., 2021). We do not change any convolutional layers, but we change the classifier to use a single global average pooling layer, followed by a single fully-connected layer.
A.2 TRAINING DETAILS
For CIFAR-10, CIFAR-100 and TinyImageNet we perform 3 meta-steps to calculate our saliency criteria. We train the resulting models for 200 epochs, with initial learning rate 0.1; we divide the learning rate by 10 at epochs 100 and 150. Weight decay was set to 5×10−4. Batch size for CIFAR10, CIFAR-100, and TinyImageNet was 256. For CIFAR-10 and CIFAR-100 we augment training data by applying random cropping (32× 32, padding 4), and horizontal flipping. For TinyImageNet we use the same procedure, with random cropping parameters set to 64× 64, padding 4. For ImageNet we train models for 100 epochs, with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 30, 60 and 90. Weight decay was set to 1× 10−4. Batch size was 256. We use the first order approximation to do pruning, and use 1024 steps for ResNet-50. For VGG-19 we use 2048 steps, but with batch size set to 128 (due to memory limitations, as our implementation only utilized a single GPU for meta-training). We apply random resizing, then crop the image to 224× 224, with horizontal flipping.
A.3 IMPLEMENTATIONS
In addition to our code, the reader may find it useful to reference the following repos from related work. Our experiments were performed using code derived from these implementations:
B NUMBERS FROM FIGURE 2
Sparsity (%) 20.0 36.0 48.8 59.0 67.2 73.8 79.0 83.2 86.6 89.3 91.4 93.1 94.5 95.6 96.5 97.2 97.7 98.2 LTR after Training 91.8 ± 0.2 91.9 ± 0.2 91.9 ± 0.2 91.7 ± 0.2 91.5 ± 0.1 91.4 ± 0.1 91.1 ± 0.1 90.6 ± 0.1 90.1 ± 0.0 89.2 ± 0.1 88.0 ± 0.2 86.8 ± 0.2 85.7 ± 0.1 84.4 ± 0.2 82.8 ± 0.1 81.2 ± 0.3 79.4 ± 0.3 77.3 ± 0.5 Magnitude after Training 92.2 ± 0.3 92.0 ± 0.2 92.0 ± 0.2 91.7 ± 0.1 91.5 ± 0.2 91.3 ± 0.2 91.1 ± 0.2 90.7 ± 0.2 90.2 ± 0.2 89.4 ± 0.2 88.7 ± 0.2 87.7 ± 0.2 86.5 ± 0.2 85.2 ± 0.2 83.5 ± 0.3 81.9 ± 0.3 80.4 ± 0.2 77.7 ± 0.4 Magnitude at Initialization 91.5 ± 0.2 91.2 ± 0.1 90.8 ± 0.1 90.7 ± 0.2 90.2 ± 0.1 89.8 ± 0.2 89.3 ± 0.2 88.6 ± 0.2 87.9 ± 0.3 87.0 ± 0.3 86.1 ± 0.2 85.2 ± 0.4 83.9 ± 0.2 82.5 ± 0.4 80.7 ± 0.5 79.1 ± 0.4 77.2 ± 0.4 74.5 ± 0.7 SNIP 91.8 ± 0.2 91.2 ± 0.3 90.9 ± 0.1 90.7 ± 0.1 90.1 ± 0.2 89.7 ± 0.3 89.0 ± 0.2 88.5 ± 0.3 87.7 ± 0.2 87.2 ± 0.4 85.8 ± 0.1 84.7 ± 0.5 83.8 ± 0.3 82.5 ± 0.4 80.9 ± 0.2 79.1 ± 0.2 77.3 ± 0.2 74.0 ± 0.5 GraSP 91.5 ± 0.1 91.3 ± 0.2 91.2 ± 0.1 90.6 ± 0.2 90.3 ± 0.2 89.6 ± 0.1 89.1 ± 0.2 88.4 ± 0.2 87.9 ± 0.1 87.0 ± 0.2 85.9 ± 0.1 85.1 ± 0.4 83.9 ± 0.4 82.8 ± 0.2 81.2 ± 0.2 79.7 ± 0.3 78.0 ± 0.3 76.0 ± 0.5 SynFlow 91.7 ± 0.1 91.3 ± 0.2 91.2 ± 0.1 90.8 ± 0.1 90.4 ± 0.2 89.8 ± 0.1 89.5 ± 0.3 88.9 ± 0.4 88.1 ± 0.1 87.4 ± 0.5 86.1 ± 0.2 85.4 ± 0.2 84.3 ± 0.2 82.9 ± 0.2 81.7 ± 0.2 80.0 ± 0.3 78.6 ± 0.4 76.4 ± 0.4 Random 91.6 ± 0.2 91.2 ± 0.2 90.8 ± 0.3 90.5 ± 0.2 89.8 ± 0.2 89.0 ± 0.4 88.4 ± 0.2 87.5 ± 0.3 86.6 ± 0.2 85.6 ± 0.3 84.3 ± 0.4 83.1 ± 0.4 81.6 ± 0.3 79.6 ± 0.4 74.2 ± 6.4 64.7 ± 9.7 56.9 ± 8.5 43.7 ± 12.5 ProsPr 92.3 ± 0.1 92.1 ± 0.0 91.7 ± 0.2 91.5 ± 0.1 91.0 ±0.2 90.5 ± 0.0 90.1 ± 0.1 89.6 ± 0.2 88.5 ± 0.5 87.8 ± 0.1 86.9 ± 0.3 85.5 ± 0.6 84.3 ± 0.2 83.0 ± 0.9 80.8 ± 0.5 79.6 ± 0.7 77.0 ± 0.8 74.2 ± 0.3
Table 5: Numerical results for VGG-16 on CIFAR-10
Sparsity (%) | 20.0 | 36.0 | 48.8 | 59.0 | 67.2 | 73.8 | 79.0 | 83.2 | 86.6 | 89.3 | 91.4 | 93.1 | 94.5 | 95.6 | 96.5 | 97.2 | 97.7 | 98.2
LTR after Training | 93.5 ± 0.1 | 93.6 ± 0.1 | 93.6 ± 0.1 | 93.6 ± 0.1 | 93.8 ± 0.1 | 93.6 ± 0.1 | 93.6 ± 0.1 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.7 ± 0.1 | 93.7 ± 0.1 | 93.8 ± 0.1 | 93.5 ± 0.2 | 93.4 ± 0.1 | 93.2 ± 0.1 | 93.0 ± 0.2 | 92.7 ± 0.1 | 92.1 ± 0.4
Magnitude after Training | 93.9 ± 0.2 | 93.9 ± 0.2 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.9 ± 0.1 | 94.0 ± 0.2 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.9 ± 0.2 | 93.9 ± 0.2 | 93.8 ± 0.2 | 93.7 ± 0.2 | 93.5 ± 0.1 | 93.5 ± 0.1 | 93.3 ± 0.2 | 93.0 ± 0.1 | 92.9 ± 0.1 | 91.7 ± 0.8
Magnitude at Initialization | 93.6 ± 0.2 | 93.4 ± 0.2 | 93.3 ± 0.1 | 93.2 ± 0.2 | 93.3 ± 0.3 | 93.0 ± 0.1 | 93.1 ± 0.1 | 92.9 ± 0.1 | 92.9 ± 0.2 | 92.7 ± 0.1 | 92.5 ± 0.2 | 92.3 ± 0.1 | 92.2 ± 0.2 | 92.0 ± 0.1 | 91.8 ± 0.2 | 91.5 ± 0.1 | 91.3 ± 0.3 | 90.9 ± 0.2
SNIP | 93.6 ± 0.1 | 93.4 ± 0.1 | 93.3 ± 0.1 | 93.4 ± 0.2 | 93.3 ± 0.2 | 93.4 ± 0.1 | 93.1 ± 0.1 | 93.1 ± 0.1 | 93.2 ± 0.1 | 93.1 ± 0.1 | 92.9 ± 0.1 | 92.8 ± 0.2 | 92.8 ± 0.1 | 92.3 ± 0.2 | 92.2 ± 0.1 | 92.1 ± 0.1 | 91.7 ± 0.1 | 91.5 ± 0.1
GraSP | 93.5 ± 0.1 | 93.4 ± 0.2 | 93.5 ± 0.0 | 93.3 ± 0.1 | 93.2 ± 0.2 | 93.3 ± 0.2 | 93.2 ± 0.1 | 93.0 ± 0.3 | 93.0 ± 0.1 | 92.7 ± 0.2 | 92.8 ± 0.1 | 92.4 ± 0.1 | 92.3 ± 0.1 | 92.2 ± 0.1 | 91.9 ± 0.1 | 91.6 ± 0.2 | 91.5 ± 0.0 | 91.2 ± 0.2
SynFlow | 93.6 ± 0.2 | 93.6 ± 0.1 | 93.5 ± 0.1 | 93.4 ± 0.1 | 93.4 ± 0.2 | 93.5 ± 0.2 | 93.2 ± 0.1 | 93.2 ± 0.1 | 93.1 ± 0.1 | 92.9 ± 0.1 | 92.7 ± 0.2 | 92.5 ± 0.1 | 92.3 ± 0.1 | 92.0 ± 0.1 | 91.8 ± 0.3 | 91.3 ± 0.1 | 91.0 ± 0.2 | 90.6 ± 0.2
Random | 93.6 ± 0.3 | 93.2 ± 0.1 | 93.2 ± 0.2 | 93.0 ± 0.2 | 92.7 ± 0.2 | 92.4 ± 0.2 | 92.2 ± 0.1 | 91.7 ± 0.1 | 91.2 ± 0.1 | 90.8 ± 0.2 | 90.3 ± 0.2 | 89.6 ± 0.2 | 88.8 ± 0.2 | 88.3 ± 0.4 | 87.6 ± 0.1 | 86.4 ± 0.2 | 86.0 ± 0.4 | 84.5 ± 0.4
ProsPr | 93.7 ± 0.2 | 93.7 ± 0.1 | 93.9 ± 0.1 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.5 ± 0.2 | 93.6 ± 0.1 | 93.4 ± 0.3 | 93.5 ± 0.2 | 93.3 ± 0.1 | 93.0 ± 0.1 | 93.0 ± 0.1 | 92.8 ± 0.3 | 92.7 ± 0.1 | 92.6 ± 0.1 | 92.2 ± 0.1 | 92.1 ± 0.2 | 91.6 ± 0.4
Table 6: Numerical results for ResNet-18 on TinyImageNet
Sparsity (%) | 20.0 | 36.0 | 48.8 | 59.0 | 67.2 | 73.8 | 79.0 | 83.2 | 86.6 | 89.3 | 91.4 | 93.1 | 94.5 | 95.6 | 96.5 | 97.2 | 97.7 | 98.2
LTR after Training | 51.7 ± 0.2 | 51.4 ± 0.3 | 51.5 ± 0.4 | 52.1 ± 0.4 | 51.8 ± 0.4 | 52.0 ± 0.1 | 52.0 ± 0.1 | 52.0 ± 0.2 | 52.1 ± 0.3 | 52.0 ± 0.2 | 52.4 ± 0.2 | 51.8 ± 0.4 | 51.8 ± 0.6 | 51.4 ± 0.4 | 50.9 ± 0.2 | 49.3 ± 0.7 | 48.3 ± 0.7 | 46.0 ± 0.3
Magnitude after Training | 51.7 ± 0.3 | 51.4 ± 0.1 | 51.7 ± 0.2 | 51.5 ± 0.3 | 51.7 ± 0.4 | 51.4 ± 0.5 | 51.1 ± 0.3 | 51.4 ± 0.4 | 51.3 ± 0.4 | 51.1 ± 0.6 | 51.7 ± 0.3 | 51.3 ± 0.3 | 51.8 ± 0.4 | 51.2 ± 0.3 | 51.1 ± 0.2 | 50.4 ± 0.2 | 49.0 ± 0.2 | 47.8 ± 0.5
Magnitude at Initialization | 51.0 ± 0.3 | 51.2 ± 0.3 | 51.0 ± 0.2 | 50.5 ± 0.5 | 50.6 ± 0.3 | 50.0 ± 0.3 | 50.3 ± 0.2 | 50.3 ± 0.3 | 50.0 ± 0.1 | 49.8 ± 0.5 | 49.0 ± 0.1 | 48.3 ± 0.3 | 47.2 ± 0.2 | 46.2 ± 0.2 | 44.4 ± 0.5 | 42.2 ± 0.1 | 40.8 ± 0.4 | 38.1 ± 0.6
SNIP | 51.4 ± 0.2 | 51.5 ± 0.3 | 51.4 ± 0.3 | 51.3 ± 0.5 | 51.6 ± 0.4 | 51.4 ± 0.5 | 51.9 ± 0.6 | 51.5 ± 0.3 | 51.0 ± 0.2 | 51.2 ± 0.7 | 50.6 ± 0.3 | 50.1 ± 0.3 | 49.2 ± 0.3 | 47.8 ± 0.2 | 46.7 ± 0.1 | 45.2 ± 0.4 | 44.5 ± 0.3 | 42.3 ± 0.3
GraSP | 49.8 ± 0.4 | 49.1 ± 0.3 | 49.5 ± 0.2 | 49.5 ± 0.4 | 49.2 ± 0.1 | 49.5 ± 0.2 | 48.7 ± 0.1 | 49.0 ± 0.5 | 48.8 ± 0.4 | 48.3 ± 0.1 | 48.2 ± 0.1 | 47.7 ± 0.2 | 46.5 ± 0.1 | 45.5 ± 0.7 | 44.9 ± 0.2 | 44.1 ± 1.0 | 42.9 ± 0.5 | 41.0 ± 0.1
SynFlow | 51.8 ± 0.3 | 51.6 ± 0.3 | 51.7 ± 0.7 | 51.8 ± 0.2 | 51.3 ± 0.4 | 51.3 ± 0.4 | 51.5 ± 0.2 | 51.0 ± 0.4 | 50.2 ± 0.4 | 50.4 ± 0.3 | 49.1 ± 0.0 | 48.0 ± 0.5 | 46.7 ± 0.7 | 45.6 ± 0.0 | 44.0 ± 0.2 | 42.2 ± 0.3 | 40.0 ± 0.1 | 38.2 ± 0.5
Random | 50.6 ± 0.5 | 50.1 ± 0.2 | 49.9 ± 0.3 | 48.7 ± 0.2 | 48.0 ± 0.4 | 48.0 ± 0.6 | 46.4 ± 0.1 | 45.9 ± 0.5 | 44.7 ± 0.2 | 43.6 ± 0.3 | 42.7 ± 0.2 | 41.4 ± 0.4 | 40.2 ± 0.2 | 37.2 ± 0.2 | 36.2 ± 0.7 | 34.0 ± 0.4 | 32.2 ± 0.5 | 30.0 ± 0.3
ProsPr | 51.8 ± 0.4 | 51.4 ± 0.7 | 51.2 ± 0.9 | 52.0 ± 0.2 | 51.8 ± 0.1 | 51.2 ± 0.4 | 52.0 ± 0.3 | 51.6 ± 0.7 | 51.1 ± 0.4 | 50.7 ± 0.6 | 50.9 ± 0.3 | 50.8 ± 1.2 | 51.1 ± 0.7 | 50.8 ± 0.5 | 50.8 ± 0.3 | 49.6 ± 0.6 | 49.2 ± 0.2 | 46.9 ± 0.7
C WALL CLOCK TIME FOR STRUCTURE PRUNING AT INITIALIZATION
When pruning is done at convergence, the benefits of having a compressed model (in terms of memory saving and speed-up) can only be utilized at inference/deployment time. However, with pruning-at-initialization these benefits can be reaped during training as well. This is especially true in the case of structured pruning, where pruning results in weights and convolutional kernels with smaller dimensions (as opposed to unstructured pruning, where we end up with sparse weights with the original dimensions). This means that, in addition to memory savings, training takes fewer operations, which speeds up training. To evaluate the benefits of pruning at initialization in terms of speed improvements, we measured the wall-clock training time on an NVIDIA RTX 2080 Ti GPU for the architectures used in Section 4.3 (and additionally on the ImageNet dataset). The results in Table 7 show that structured pruning with ProsPr can significantly reduce the overall training time.
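As a reference point, wall-clock comparisons of this kind can be made with a simple timing harness such as the sketch below (our naming, not the authors' benchmarking code); explicit synchronisation is needed so that asynchronous GPU kernels are included in the measurement.

```python
import time
import torch

def epoch_wall_time(model, loader, optimizer, loss_fn, device="cuda"):
    # Synchronise before and after so asynchronous GPU kernels are counted.
    torch.cuda.synchronize()
    start = time.perf_counter()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    torch.cuda.synchronize()
    return time.perf_counter() - start
```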
D RESULTS ON SEGMENTATION TASK
An interesting, albeit less common, application for pruning models is within the context of segmentation. In a recent paper, Jeong et al. (2021) train and prune the U-Net (Ronneberger et al., 2015) architecture on two image datasets from the Cell Tracking Challenge (PhC-C2DH-U373 and DIC-C2DH-HeLa). They use the classic multi-step approach of gradually applying magnitude pruning interleaved with fine-tuning stages. To evaluate the flexibility of our method, we used meta-gradients at the beginning of training (on a randomly initialized U-Net), pruned in a single shot, and trained the network once for the same number of epochs (50). We kept the training set-up the same as the baseline by Jeong et al. (2021) (i.e., resizing images and segmentation maps to (256, 256) and setting aside 30% of training data for validation) and similarly aim to find the highest prune ratio that does not result in IOU degradation. We report the intersection-over-union (IOU) metric for the two datasets in Tables 8 and 9:
Table 8: Mean-IOU on U373 validation
Method        | Prune Ratio | Mean IOU
Unpruned      | -           | 0.9371
Jeong et al.  | 95%         | 0.9368
ProsPr        | 97%         | 0.9369
Table 9: Mean-IOU on HeLa validation
Method        | Prune Ratio | Mean IOU
Unpruned      | -           | 0.7514
Jeong et al.  | 81.8%       | 0.7411
ProsPr        | 90%         | 0.7491
These results show that our method works as well as (or better than) this compute-expensive baseline, in the sense that we can prune more parameters while keeping the IOU score the same.
E SELF-SUPERVISED INITIALIZATION
To evaluate the robustness and consistency of our method against non-random initialization, we ran experiments using BYOL to learn representations from unlabeled samples (Grill et al., 2020). We used ResNet-18 as a backbone and trained for 1000 epochs with an embedding size of 64. Unlike the vanilla ResNet-18 architecture used in Section 4.3, we used the commonly-used modified version of ResNet-18 for smaller inputs (removing the first pooling layer and modifying the first convolutional layer to have a kernel size of 3, stride of 1, and padding size of 1). We then used this trained ResNet-18 as the initialization for our meta-gradient pruning method. After the pruning step, all layers were trained as before until convergence. All training hyper-parameters were kept as before. The results (final test accuracies for 95% pruning) are summarized in Table 10.
These results show the robustness of our method for this particular self-supervised initialization. Starting from a learned representation can be challenging because these representations are much closer to weight values at convergence, and therefore the magnitude of their gradients is significantly smaller than randomly initialized weights. However, this is less of a problem for meta-gradients as their magnitude is still significant due to back-propagation through training steps. This can be seen in Figure 3 which shows the L2 norm of gradients of each layer of a BYOL-initialized ResNet-18 for meta-gradients compared to normal gradients. It can be seen that meta-gradients provide a stronger signal compared to normal gradients. | 1. What are the strengths and weaknesses of the paper regarding its contribution to pruning neural networks at initialization?
2. How does the proposed method, Prospect Pruning, differ from existing methods of pruning at initialization?
3. Can you explain the first-order approximation used in the proposed method and how it affects the accuracy and sparsity of the model?
4. How does the number of initial training steps, M, affect the accuracy and sparsity of the model?
5. What is the error bound between the approximated meta-gradient and the ground-truth meta-gradient, and how does it relate to the discrepancy between the approximated gradient and the ground-truth meta-gradient?
6. How does the proposed method behave when the weights are learned from unlabeled samples, as in self-supervised learning?
7. How does the novelty of the proposed method compare to other works in the field, such as SNIP? | Summary Of The Paper
Review | Summary Of The Paper
This work studies the problem of pruning neural networks at initialization. It first identifies that the saliency score defined by the existing method SNIP has room for improvement. Specifically, the authors propose a method named prospect pruning to take into account the sequence of weight updates to determine the pruning mask. The experimental results on Tiny ImageNet and CIFAR show that the proposed method achieves better performance than existing methods of pruning at initialization.
Review
Strengths:
This paper is well-written.
The proposed method is easy to implement.
The experimental results support the claim that the proposed method achieves higher accuracy compared to existing pruning-at-init methods.
Weaknesses:
According to the proposed first-order approximation, it seems that the proposed method is limited to use with the SGD optimizer. There is a variety of models that are trained with other optimization methods, like Adam, AdamW, and Adabound. It would be helpful to provide a discussion on how to adapt the proposed method to work with generic (or specific) optimization methods.
The first-order approximation (Eq. (17)) may not be well studied. The following questions are not clear: Is it sensitive to how $W_{init}$ is initialized (e.g., xavier_uniform, kaiming_normal, etc.)? Is it robust to the sequence data? How does the number $M$ of initial training steps affect the accuracy and sparsity?
An important step to the first-order approximation is to drop the higher-order terms of Eq. (15), i.e., $\prod_{m=1}^{M} \left[ I - \alpha \nabla^2_{w_{m-1}} L(w_{m-1}; D) \right]$. A follow-up question is, what is the error bound (w.r.t. $M$) between the approximated meta-gradient and the ground-truth meta-gradient? For example, assume that the loss function meets the second-order necessary condition $\nabla^2_{w_{m-1}} L(w_{m-1}; D) \succeq 0$; as $M$ gets large enough, the operation $\prod_{m=1}^{M} \left[ I - \alpha \nabla^2_{w_{m-1}} L(w_{m-1}; D) \right]$ could lead to vanishing and exploding gradients. However, the approximated gradient $\nabla_{w_M} L(w_{m-1}; D)\, w_{init}$ seems to be still robust. How to interpret the discrepancy?
Figure 2 shows that pruning at convergence achieves a better trade-off between accuracy and sparsity than pruning at initialization. So the advantage(s) of pruning-at-initialization techniques over pruning-at-convergence ones are not clear. I looked into the introduction and the related work on pruning at initialization, but I didn't find points related to this. It would be better to explicitly discuss the comparison to make the motivation clearer. If I missed something, please point it out.
The algorithm presumes that $w_{init}$ is randomly initialized. It would be interesting to verify if the proposed method behaves consistently with $w_{init}$ that is learned from unlabeled samples, which is a common practice in self-supervised learning.
The related work of meta-gradients misses a related work, that is, meta-gradient in semi-supervised learning [r1].
This is not a critical comment, but I'd like to bring it to the discussion. Regarding the novelty, the proposed method relies on the saliency score (Eq. 1 and Eq. 11) defined in SNIP. This may make this work arguably look incremental. The authors present a discussion "our method, to be introduced in Section 3, also relies on computing the saliency scores for each element in the mask but uses a more sophisticated loss function to incorporate the notion of trainability into the objective." Still, I think the work is a bit weak in terms of the novelty of the methodology.
References:
[r1] Xiao, Taihong, Xin-Yu Zhang, Haolin Jia, Ming-Ming Cheng, and Ming-Hsuan Yang. "Semi-Supervised Learning with Meta-Gradient." In International Conference on Artificial Intelligence and Statistics, pp. 73-81. PMLR, 2021. |
ICLR | Title
Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients
Abstract
Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference. However, current methods are insufficient to enable this optimization and lead to a large degradation in model performance. In this paper, we identify a fundamental limitation in the formulation of current methods, namely that their saliency criteria look at a single step at the start of training without taking into account the trainability of the network. While pruning iteratively and gradually has been shown to improve pruning performance, explicit consideration of the training stage that will immediately follow pruning has so far been absent from the computation of the saliency criterion. To overcome the short-sightedness of existing methods, we propose Prospect Pruning (ProsPr), which uses meta-gradients through the first few steps of optimization to determine which weights to prune. ProsPr combines an estimate of the higher-order effects of pruning on the loss and the optimization trajectory to identify the trainable sub-network. Our method achieves state-of-the-art pruning performance on a variety of vision classification tasks, with less data and in a single shot compared to existing pruning-at-initialization methods. Our code is available online at https://github.com/mil-ad/prospr.
1 INTRODUCTION
Pruning at initialization—where we remove weights from a model before training begins—is a recent and promising area of research that enables us to enjoy the benefits of pruning at training time, and which may aid our understanding of training deep neural networks.
Frankle & Carbin (2019) provide empirical evidence for the existence of sparse sub-networks that can be trained from initialization and achieve accuracies comparable to the original network. These “winning tickets” were originally found in an iterative process where, in each iteration, the network is trained to full convergence followed by pruning a subset of the weights by magnitude. The values of the remaining weights are then rewound to their value at initialization, and the process is repeated iteratively until the desired sparsity level is achieved.
This process, known as Lottery Ticket Rewinding (LTR), is very compute-intensive and is prone to failures. For instance, Frankle et al. (2020) show better results by rewinding weights not all the way back to initialization, but to early stages of training instead. LTR is especially prone to failure for more difficult problems (e.g., training on ImageNet), where we must rewind weights to their state several epochs into training.
A recent line of work proposes alternative practical solutions to identify these sub-networks before training begins, without the cost of retraining the network iteratively Lee et al. (2018); Wang et al. (2020); de Jorge et al. (2021); Tanaka et al. (2020). This class of methods uses gradients to assess
∗Corresponding author. Contact at milad.alizadeh@cs.ox.ac.uk
the importance of neural network weights. These gradients are often known as Synaptic Saliencies and are used to estimate the effect of pruning a single parameter in isolation on various objectives, typically the loss function. This objective is not so different from classical pruning-at-convergence methods, but the gradients for a well-trained model are small; therefore these methods must inspect higher-order metrics such as the Hessian to estimate the pruning effect (LeCun et al., 1990; Hassibi & Stork, 1993). Pruning at initialization is desirable because the benefits of pruning (in terms of memory and speed) can be reaped during training, rather than only at inference/deployment time.
However, the performance of prune-at-init methods remains poor: the degradation in accuracy is still significant compared to training the full model and LTR, making these methods impractical for many real-world problems (Frankle et al., 2021). In this paper, we identify a fundamental limitation in the objective formulation of current methods, namely that saliency criteria do not take into account the fact that the model is going to be trained after the pruning step. If our aim was to simply prune a subset of weights without affecting the loss, then these saliency criteria are estimating the correct objective. However, this estimate does not take into account that we are going to train the weights after we prune them. We need a metric that captures the trainability of the weights during the optimization steps, rather than a single myopic estimate.
Many methods attempt to overcome this by pruning gradually and/or adding training steps between iterative pruning steps (Zhu & Gupta, 2018; You et al., 2020; de Jorge et al., 2021). Although this approach has been shown to be effective, it is expensive and cumbersome in practice and ultimately is an indirect approximation to the trainability criteria we are looking to incorporate into our objective.
In this paper, we propose Prospect Pruning (ProsPr), a new pruning-at-init method that learns from the first few steps of optimization which parameters to prune. We explicitly formulate our saliency criteria to account for the fact that the network will be trained after pruning. More precisely, ProsPr uses meta-gradients by backpropagating through the first few model updates in order to estimate the effect the initial pruning parameters have on the loss after a few gradient descent steps. Effectively this enables us to account for both higher-order effects of pruning weights on the loss, as well as the trainability of individual weights. Similar to other methods we apply pruning to initialization values of weights and train our models from scratch. In summary, our contributions are:
• We identify a key limitation in prior saliency criteria for pruning neural networks—namely that they do not explicitly incorporate trainability-after-pruning into their criteria.
• We propose a new pruning-at-init method, ProsPr, that uses meta-gradients over the first few training steps to bridge the gap between pruning and training.
• We show empirically that ProsPr achieves higher accuracy compared to existing pruning-at-init methods. Unlike other methods, our approach is single shot in the sense that the pruning is applied to the network's initial weights in a single step.
2 BACKGROUND
In this section we review the key concepts that our method builds upon. We delay comparisons to other pruning techniques in the literature to Section 5.
Classic post-training pruning methods aim to identify and remove network weights with the least impact on the loss (LeCun et al., 1990; Hassibi & Stork, 1993). They typically use the Taylor expansion of the loss with respect to parameters to define a saliency score for each parameter: $\delta L \approx \nabla_\theta L^\top \delta\theta + \frac{1}{2}\delta\theta^\top H \delta\theta$, where $H = \nabla^2_\theta L$ is the Hessian matrix. When the network has converged, the first-order term in the expansion is negligible, and hence these methods resort to using $H$.
Lee et al. (2018) introduce SNIP, and show that the same objective of minimizing the change in loss can be used at initialization to obtain a trainable pruned network. At initialization, the first-order gradients ∇θ in the local quadratic approximation are still significant, so higher-order terms can be ignored. Hence the computation of the parameter saliencies can be done using backpropagation.
The Taylor expansion approximates the effect of small additive perturbations to the loss. To better approximate the effect of removing a weight, Lee et al. (2018) attach a multiplicative all-one mask to the computation graph of each weight. This does not change the forward-pass of the network, but it enables us to form the Taylor expansion around the mask values, rather than the weights, to estimate the effect of changing the mask values from 1 to 0. More specifically, SNIP computes the
saliency scores according to:
$$s_j = \frac{|g_j(w, \mathcal{D})|}{\sum_{k=1}^{m} |g_k(w, \mathcal{D})|}, \qquad (1)$$

with

$$g_j(w, \mathcal{D}) = \frac{\partial L(c \odot w, \mathcal{D})}{\partial c_j}, \qquad (2)$$
where m is the number of weights in the network, c ∈ {0, 1}m is the pruning mask (initialised to 1 above), D is the training dataset, w are the neural network weights, L is the loss function, and ⊙ is the Hadamard product. These saliency scores are computed before training the network, using one (or more) mini-batches from the training set. The global Top-K weights with the highest saliency scores are retained (cj = 1), and all other weights are pruned (cj = 0), before the network is trained.
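As an illustration of Equations (1)-(2), the sketch below computes SNIP saliencies in PyTorch by differentiating the loss with respect to an all-one multiplicative mask; the functional MLP and all names are ours for illustration, not SNIP's original code.

```python
import torch
import torch.nn.functional as F

def snip_saliencies(weights, forward_fn, loss_fn, x, y):
    """Eqs. (1)-(2): saliency of each weight via the gradient w.r.t. an all-one mask."""
    masks = [torch.ones_like(w, requires_grad=True) for w in weights]
    masked = [c * w for c, w in zip(masks, weights)]
    loss = loss_fn(forward_fn(masked, x), y)
    grads = torch.autograd.grad(loss, masks)           # g_j = dL/dc_j
    flat = torch.cat([g.abs().flatten() for g in grads])
    return flat / flat.sum()                           # normalised scores s_j

# A tiny two-layer MLP with weights passed functionally (illustrative only).
def mlp_forward(ws, x):
    return F.relu(x @ ws[0]) @ ws[1]

w = [torch.randn(784, 128), torch.randn(128, 10)]
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
scores = snip_saliencies(w, mlp_forward, F.cross_entropy, x, y)
keep = torch.topk(scores, k=int(0.1 * scores.numel())).indices  # global Top-K
```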
Our method, to be introduced in Section 3, also relies on computing the saliency scores for each mask element, but uses a more sophisticated loss function to incorporate the notion of trainability.
3 OUR METHOD: PROSPR
In this section we introduce our method, Prospect Pruning (ProsPr). We note that for the problem of pruning at initialization, the pruning step is immediately followed by training. Therefore, pruning should take into account the trainability of a weight, instead of only its immediate impact on the loss before training. In other words, we want to be able to identify weights that are not only important at initialization, but which may be useful for reducing the loss during training. To this end, we propose to estimate the effect of pruning on the loss over several steps of gradient descent at the beginning of training, rather than the changes in loss at initialization.
More specifically, ProsPr models how training would happen by performing multiple (M) iterations of backpropagation and weight updates—like during normal training. We can then backpropagate through the entire computation graph, from the loss several steps into training, back to the original mask, since the gradient descent procedure is itself differentiable. Once the pruning mask is computed, we rewind the weights back to their values at initialization and train the pruned network. The gradient-of-gradients is called a meta-gradient. This algorithm is illustrated visually in Figure 1.
The higher-order information in the meta-gradient includes interactions between the weights during training. When pruning at initialization, our ultimate goal is to pick a pruned model, A, which is more trainable than an alternative pruned model B. That means we want the loss L(ŷA, y) to be lower than L(ŷB , y) at convergence (for a fixed pruning ratio). Finding the optimal pruning mask is generally infeasible since the training horizon is long (i.e., evaluation is costly) and the space of possible pruning masks is large. Unlike other methods that must compute the saliency scores iteratively, we can use the meta-gradients to compute the pruning mask in one shot. This picks a line in loss-space, which more closely predicts the eventual actual loss. This is because it smooths out over more steps, and takes into account interactions between weights in the training dynamics. Crucially, in the limit of large M, the match to the ultimate objective is exact.
3.1 SALIENCY SCORES VIA META-GRADIENTS
We now introduce ProsPr formally. After initialising the network weights randomly to obtain winit, we apply a weight mask to the initial weights,
$$w_0 = c \odot w_{init}. \qquad (3)$$
This weight mask contains only ones, c = 1, as in SNIP (Lee et al., 2018), and represents the connectivity of the corresponding weights.
We then sample $M+1$ batches of data $\mathcal{D}_i \sim \mathcal{D}_{train}$ ($i \in \{0, \dots, M\}$; $M \geq 1$) for the pruning step, and perform $M$ weight updates¹,

$$w_1 = w_0 - \alpha \nabla_{w_0} L(w_0, \mathcal{D}_0) \qquad (4)$$
$$\vdots$$
$$w_M = w_{M-1} - \alpha \nabla_{w_{M-1}} L(w_{M-1}, \mathcal{D}_{M-1}). \qquad (5)$$

Then, we compute a meta-gradient that backpropagates through these updates. Specifically, we compute the gradient of the final loss w.r.t. the initial mask,

$$\nabla_c L(w_M, \mathcal{D}_M). \qquad (6)$$

Using the chain rule, we can write out the form of the meta-gradient beginning from the last step:

$$\nabla_c L(w_M, \mathcal{D}) = \nabla_{w_M} L(w_M, \mathcal{D}) (\nabla_c w_M), \qquad (7)$$

repeating for each step until we reach the zero'th step whose gradient is trivial,

$$= \nabla_{w_M} L(w_M, \mathcal{D}) (\nabla_{w_{M-1}} w_M) \cdots (\nabla_{w_0} w_1) (\nabla_c w_0) \qquad (8)$$
$$= \nabla_{w_M} L(w_M, \mathcal{D}) (\nabla_{w_{M-1}} w_M) \cdots (\nabla_{w_0} w_1) (\nabla_c (c \odot w_{init})) \qquad (9)$$
$$= \nabla_{w_M} L(w_M, \mathcal{D}) \left[ \prod_{m=1}^{M} (\nabla_{w_{m-1}} w_m) \right] w_{init}. \qquad (10)$$
In practice, we can compute the meta-gradients by relying on automatic differentiation software such as PyTorch (Paszke et al., 2019). However, care must be taken to ensure that weights at each step are kept in memory so that the entire computation graph, including gradients, is visible to the automatic differentiation software. The saliency scores are now given by
$$s_j = \frac{|g_j(w, \mathcal{D})|}{\sum_{k=1}^{m} |g_k(w, \mathcal{D})|}, \qquad (11)$$

with

$$g_j(w, \mathcal{D}) = \frac{\partial L(w_M, \mathcal{D})}{\partial c_j}, \qquad (12)$$
where wM is a function of c. Equation (12) stands in contrast to SNIP, where the saliency is computed using the loss at c ·winit rather than wM . The saliency scores are then used to prune the initial weights winit: the ones with the highest saliency scores are retained (cj = 1), and all other weights are pruned (cj = 0). Finally, the network is trained with the pruned weights ŵinit.
Algorithm 1 summarises the proposed method, ProsPr.

¹We formalise the weight updates using vanilla SGD here; in practice these may be different when using approaches such as momentum or BatchNorm (Ioffe & Szegedy, 2015). Since our implementation relies on automatic differentiation in PyTorch (Paszke et al., 2019), we can use any type of update, as long as it is differentiable w.r.t. the initial mask c.
Algorithm 1 ProsPr Pseudo-Code
1: Inputs: a training dataset $\mathcal{D}_{train}$, number of initial training steps $M$, number of main training steps $N$ ($M \ll N$), learning rate $\alpha$
2: Initialise: network weights $w_{init}$
3: $c_{init} = \mathbf{1}$  ▷ Initialise mask with ones
4: $w_0 = c_{init} \odot w_{init}$  ▷ Apply mask to initial weights
5: for $k = 0, \dots, M-1$ do
6:     $\mathcal{D}_k \sim \mathcal{D}_{train}$  ▷ Sample batch of data
7:     $w_{k+1} = w_k - \alpha \nabla_w L(w_k, \mathcal{D}_k)$  ▷ Update network weights
8: end for
9: $g_j(w, \mathcal{D}) = \partial L(w_M, \mathcal{D}) / \partial c_j$  ▷ Compute meta-gradient
10: $s_j = \frac{|g_j(w, \mathcal{D})|}{\sum_{k=1}^{m} |g_k(w, \mathcal{D})|}$  ▷ Compute saliency scores
11: Determine the $k$-th largest element in $s$, $s_k$.
12: $c_{prune} = \begin{cases} 1 & \text{if } s_j \geq s_k \\ 0 & \text{otherwise} \end{cases}$  ▷ Set pruning mask
13: $\hat{w}_0 = c_{prune} \odot w_{init}$  ▷ Apply mask to initial weights $w_{init}$
14: for $i = 0, \dots, N-1$ do  ▷ Train pruned model
15:     $\hat{w}_{i+1} = \hat{w}_i - \alpha \nabla_w L(\hat{w}_i, \mathcal{D})$
16: end for
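To make Algorithm 1 concrete, below is a minimal functional PyTorch sketch of the meta-gradient step (lines 3-10). The functional forward style and helper names are ours; `batches` is assumed to hold the $M+1$ sampled batches, and vanilla SGD is used, as in footnote 1.

```python
import torch

def prospr_saliencies(w_init, forward_fn, loss_fn, batches, lr=0.1):
    """Differentiate the loss after M inner SGD steps w.r.t. the initial mask c."""
    masks = [torch.ones_like(w, requires_grad=True) for w in w_init]
    ws = [c * w for c, w in zip(masks, w_init)]          # w_0 = c * w_init
    for x, y in batches[:-1]:                            # M inner updates (lines 5-8)
        loss = loss_fn(forward_fn(ws, x), y)
        grads = torch.autograd.grad(loss, ws, create_graph=True)
        ws = [w - lr * g for w, g in zip(ws, grads)]     # differentiable SGD update
    x, y = batches[-1]
    meta_loss = loss_fn(forward_fn(ws, x), y)            # L(w_M, D_M)
    meta_grads = torch.autograd.grad(meta_loss, masks)   # grad of loss w.r.t. c
    flat = torch.cat([g.abs().flatten() for g in meta_grads])
    return flat / flat.sum()                             # saliency scores s_j
```

Because `create_graph=True` keeps the inner updates in the autograd graph, the final call backpropagates through all $M$ steps back to the mask, exactly as in Equation (10).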
3.2 FIRST-ORDER APPROXIMATION
Taking the meta-gradient through many model updates (Equation (6)) can be memory intensive: in the forward pass, all gradients of the individual update steps need to be retained in memory to then be able to backpropagate all the way to the initial mask. However, we only need to perform a few steps² at the beginning of training, so in practice we can perform the pruning step on the CPU, which usually has access to more memory than a GPU. We apply this approach in our own experiments, with overheads of around 30 seconds being observed for the pruning step.
Alternatively, when the number of training steps needs to be large, we can use the following first-order approximation. Using Equation (10), the meta-gradient is:

$$\nabla_c L(w_M, \mathcal{D}_M) = \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} (\nabla_{w_{m-1}} w_m) \right] w_{init}, \qquad (13)$$

writing $w_m$ in terms of $w_{m-1}$ following SGD,

$$= \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} \nabla_{w_{m-1}} \left( w_{m-1} - \alpha \nabla_{w_{m-1}} L(w_{m-1}; \mathcal{D}_m) \right) \right] w_{init}, \qquad (14)$$

carrying through the partial derivative,

$$= \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} \left( I - \alpha \nabla^2_{w_{m-1}} L(w_{m-1}; \mathcal{D}_m) \right) \right] w_{init}, \qquad (15)$$

and finally dropping small terms for sufficiently small learning rates,

$$\approx \nabla_{w_M} L(w_M, \mathcal{D}_M) \left[ \prod_{m=1}^{M} I \right] w_{init} \qquad (16)$$
$$= \nabla_{w_M} L(w_M, \mathcal{D}_M)\, w_{init}. \qquad (17)$$

In the second-to-last step, we drop the higher-order terms, which gives us a first-order approximation of the meta-gradient³.
²We use 3 steps for experiments on the CIFAR-10, CIFAR-100 and TinyImageNet datasets.
³Note that this approximation also works for optimisers other than vanilla SGD (e.g., Adam, AdamW, Adabound), except that the term which is dropped (r.h.s. of Equation (15)) looks slightly different.
With this approximation, we only need to save the initial weight vector winit in memory and multiply it with the final gradient. This approximation can be crude when the Laplacian terms are large, but with a sufficiently small learning rate it becomes precise. The approximation allows us to take many more intermediate gradient-steps which can be beneficial for performance when the training dataset has many classes, as we will see in Section 4.2.
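Under this approximation, the pruning step reduces to $M$ ordinary (graph-free) SGD steps followed by a single gradient evaluation at $w_M$. A sketch under the same assumptions as the meta-gradient sketch above (our naming, vanilla SGD):

```python
import torch

def prospr_first_order(w_init, forward_fn, loss_fn, batches, lr=0.1):
    """Eq. (17): saliency from |grad_{w_M} L * w_init|; no meta-graph is kept."""
    ws = [w.clone().requires_grad_(True) for w in w_init]
    for x, y in batches[:-1]:                    # M plain SGD steps, graph discarded
        loss = loss_fn(forward_fn(ws, x), y)
        grads = torch.autograd.grad(loss, ws)
        with torch.no_grad():
            ws = [(w - lr * g).requires_grad_(True) for w, g in zip(ws, grads)]
    x, y = batches[-1]
    final = torch.autograd.grad(loss_fn(forward_fn(ws, x), y), ws)
    # Multiply the final gradient elementwise by the saved initial weights.
    flat = torch.cat([(g * w0).abs().flatten() for g, w0 in zip(final, w_init)])
    return flat / flat.sum()
```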
4 EXPERIMENTS
We empirically evaluate the performance of our method, ProsPr, compared to various vision classification baselines across different architectures and datasets. In supplementary sections we show the effectiveness of our method on image segmentation tasks (Appendix D) and when using self-supervised initialization (Appendix E). We provide our hyper-parameters, experiment setup, and implementation details in Appendix A.
4.1 RESULTS ON CIFAR AND TINY-IMAGENET
In recent work, Frankle et al. (2021) extensively study and evaluate different pruning-at-initialization methods under various effects such as weight re-initialization, weight shuffling, and score inversion. They report the best achievable results by these methods and highlight the gap between their performance and two pruning-at-convergence methods, weight rewinding and magnitude pruning (Renda et al., 2020; Frankle et al., 2020).
In Figure 2 we evaluate ProsPr on this benchmark using ResNet-20 and VGG-16 on CIFAR-10, and ResNet-18 on Tiny-ImageNet. It can be seen that ProsPr reduces the performance gap, especially at higher sparsity levels, and in some cases exceeds the accuracy of pruning-after-convergence methods. Full results are also summarised in Appendix B.
This is a remarkable achievement: ProsPr is the first work to close the gap to methods that prune after training. Previous works that prune at the start have not been able to outperform methods that prune after training in any setting, including smaller datasets such as CIFAR-10 or Tiny-ImageNet. It is also important to note that the other baselines with comparable accuracies are all iterative methods. ProsPr is the only method that can do this in a single shot, using only 3 steps with a batch size of 512 in the inner loop before computing the meta-gradients. In total, we only use 4 batches of data. We also do not do any averaging of scores by repeating the method multiple times.
The performance on these small datasets comes from the fact that ProsPr computes higher-order gradients. While there are other iterative methods that can work without any data, their effect is mostly a more graceful degradation at extreme pruning ratios, as opposed to the best accuracy at more practical sparsity levels. One example is SynFlow, which is similar to FORCE but uses an all-one input tensor instead of samples from the training set (Tanaka et al., 2020).
4.2 RESULTS ON IMAGENET DATASET
To evaluate the performance of ProsPr on more difficult tasks we run experiments on the larger ImageNet dataset. Extending gradient-based pruning methods to this dataset poses several challenges.
Number of classes In synaptic-saliency methods, the mini batches must have enough examples from all classes in the dataset. Wang et al. (2020) recommend using class-balanced mini-batches sized ten times the number of classes. In datasets with few classes this is not an issue and even a single batch includes multiple examples per class. This is one reason why methods like SNIP work with a single batch, and why we kept the number of steps in ProsPr’s inner loop fixed to only 3. ImageNet however has 1,000 classes, and using a single or a handful of small batches is inadequate. Previous methods such as FORCE, GraSP, or SynFlow avoid this problem by repeating the algorithm with new data batches and averaging the saliency scores. In ProsPr we instead increase the number of updates before computing the meta-gradients, ensuring they flow through enough data. Computing meta-gradients through many steps however poses new challenges.
Gradient degradation We start to see gradient stability issues when computing gradients over deep loops. Gradient degradation problems, i.e., vanishing and exploding gradients, have also been observed in other fields that use meta-gradients such as Meta-Learning. Many solutions have been proposed to stabilize gradients when the length of loop increases beyond 4 or 5 steps, although this remains an open area of research (Antoniou et al., 2019).
Computation Complexity For ImageNet we must make the inner loop hundreds of steps deep to achieve balanced data representation. In addition to stability issues, backpropagating through hundreds of steps is very compute intensive.
Therefore, for our experiments on ImageNet we use the first-order approximation of ProsPr (Section 3.2). We evaluate ProsPr using ResNet-50 and VGG-19 architectures and compare against the state-of-the-art methods FORCE and Iter-SNIP introduced by de Jorge et al. (2021). We include multi-batch versions of SNIP and GraSP (SNIP-MB and GraSP-MB) to provide a fair comparison to iterative methods, which partially prune several times during training, in terms of the amount of data presented to the method. We use 1024 steps with a batch size of 256 (i.e. 262,144 samples) for ResNet-50. For VGG-19, a much larger model which requires more GPU memory, we do 256 steps with a batch size of 128. This is still far fewer samples than other methods. FORCE, for example, gradually prunes in 60 steps, where each step involves computing and averaging scores over 40 batches of size 256, i.e. performing backpropagation 2400 times and showing 614,400 samples to the algorithm.
Table 1 shows our results compared to the baselines reported by de Jorge et al. (2021). First-order ProsPr exceeds previous results in all configurations except one, where it is outperformed by GraSP. Note the surprisingly good performance of random pruning of ResNets, which was also observed by de Jorge et al. (2021). This could be explained by the fact that VGG-19 is a much larger architecture, with 143.6 million parameters compared to 15.5 million in ResNet-50. More specifically, the final three dense layers of VGG-19 constitute 86% of its total prunable parameters, while the convolution layers constitute only 14% of the prunable weights. Pruning methods are therefore able to keep more of the convolution weights and instead prune extensively from the over-parametrized dense layers. ResNet architectures, on the other hand, have a single dense classifier at the end.
4.3 STRUCTURED PRUNING
We also evaluate ProsPr in the structured pruning setup where instead of pruning individual weights, entire channels (or columns of linear layers) are removed. This is a more restricted setup, however it offers memory savings and reduces the computational cost of training and inference.
Adopting ProsPr for structured pruning is as simple as changing the shape of the pruning mask c in Eq 3 to have one entry per channel (or column of the weight matrix). We evaluate our method against 3SP, a method that extends SNIP to structured pruning (van Amersfoort et al., 2020). Our results are summarized in Table 2 which show accuracy improvements in all scenarios. In Appendix C we also evaluate wall-clock improvements in training time as a result of structured pruning at initialization.
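Concretely, for a convolution weight of shape (out_channels, in_channels, k, k), a per-channel mask has one entry per output channel and broadcasts over the remaining dimensions, as in the illustrative snippet below (our sketch, not the authors' implementation).

```python
import torch

w = torch.randn(64, 32, 3, 3)   # conv weight: (C_out, C_in, k, k)
# One mask entry per output channel instead of one per weight.
c = torch.ones(w.shape[0], 1, 1, 1, requires_grad=True)
masked_w = c * w                # broadcasts over input channels and kernel dims
# Saliency scoring and Top-K selection then proceed exactly as in the
# unstructured case, but each score now removes a whole channel.
```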
4.4 NUMBER OF META STEPS
Finally, we evaluate ProsPr when using a varying number of meta steps, which gives insight into whether using meta-gradients is beneficial. We repeated experiments from Section 4.3 but this time we vary the depth of training steps between 0 and 3. The results in Table 3 show that the final accuracy consistently increases as we increase the depth of the training, showing the effectiveness of meta-gradients. We used the same data batch in all M training steps to isolate the effect of M, while in other experiments we use a new batch in every step.
In theory increasing the number of training steps should always help and match the ultimate objective (estimating the loss after many epochs of training) in the limit. However, in practice increasing the number of steps beyond 3 poses a lot of gradient stability issues (and is computationally expensive). These issues have been also identified in the meta-learning literature (Antoniou et al., 2019).
5 RELATED WORK
Pruning at initialization Several works extend the approach proposed by Lee et al. (2018). de Jorge et al. (2021) evaluate the SNIP objective in a loop in which pruned parameters still receive gradients and therefore have a chance to get un-pruned. The gradual pruning helps avoid the layer-collapse issue, and their method, known as FORCE, achieves better performance at extreme sparsity levels. Tanaka et al. (2020) provide theoretical justification for why iterative pruning helps with the layer-collapse issue and propose a data-free version of the method where an all-one input tensor is used instead of real training data. Wang et al. (2020) propose an alternative criterion to minimizing changes in the loss and instead argue for preserving the gradient flow. Their method, GraSP, keeps weights that contribute most to the norm of the gradients. van Amersfoort et al. (2020) extend SNIP and GraSP to structured pruning to make training and inference faster. They further augment the scores by their compute cost to push the pruning decision towards more FLOPS reduction.
Gradual pruning As discussed in Section 1, in existing methods the training step has been absent from the saliency computation step. As a workaround, many methods make their approaches training-aware by applying pruning gradually and interleaving it with training: Zhu & Gupta (2018) proposed an exponential schedule for pruning-during-training and Gale et al. (2019) showed its effectiveness in a broader range of tasks. Frankle & Carbin (2019) show that weight rewinding achieves better results when done in multiple prune-retrain steps. Lym et al. (2019) continuously apply structured pruning via group-lasso regularization while at the same time increasing batch sizes. You et al. (2020) find pruned architectures after a few epochs of training-and-pruning and monitoring a distance metric.
Meta-Gradients Backpropagation through gradients, and its first-order approximation, is also used in model-agnostic meta-learning literature (Finn et al., 2017; Zintgraf et al., 2019) where the objective is to find a model that can be adapted to new data in a few training steps. Similar to our setup, the meta-loss captures the trainability of a model, but additionally, the meta-gradients are used to update the network’s weights in a second loop. In self-supervised learning setting, Xiao et al. (2021) use meta-gradients to explicitly optimize a learn-to-generalize regularization term in nested meta-learning loops. Computing gradients-of-gradients is also used to regularize loss with a penalty on the gradients, for instance, to enforce Lipschitz continuity on the network (Gulrajani et al., 2017) or to control different norms of the gradients (Alizadeh et al., 2020).
6 DISCUSSION
Although pruning at initialization has the potential to greatly reduce the cost of training neural networks, existing methods have not lived up to their promise. We argue that this is, in part, because they do not account for the fact that the pruned network is going to be trained after it is pruned. We take this into account, using a saliency score that captures the effect of a pruning mask on the training procedure. As a result, our method is competitive not just with methods that prune before training, but also with methods that prune iteratively during training and those that prune after training. In principle, compressing neural networks at initialization has the potential to reduce energy and environmental costs of machine learning. Beyond our context, taking into account that methods which prune-at-convergence generally have to be fine-tuned, it is possible that our work could have further implications for these pruning methods as well (Molchanov et al., 2016; Wang et al., 2019).
ACKNOWLEDGMENTS
Milad Alizadeh is grateful for funding by the EPSRC (grant references EP/R512333/1) and Arm (via NPIF 2017 studentship). Shyam Tailor is supported by EPSRC grants EP/M50659X/1 and EP/S001530/1 (the MOA project) and the European Research Council via the REDIAL project (Grant Agreement ID: 805194). Luisa Zintgraf is supported by the 2017 Microsoft Research PhD Scholarship Program, and the 2020 Microsoft Research EMEA PhD Award. Joost van Amersfoort is grateful for funding by the EPSRC (grant reference EP/N509711/1) and Google-DeepMind. Sebastian Farquhar is supported by the EPSRC via the Centre for Doctoral Training in Cybersecurity at the University of Oxford as well as Christ Church, University of Oxford.
A EXPERIMENTAL SETUP
A.1 ARCHITECTURE DETAILS
We use standard VGG and ResNet models provided by torchvision throughout this work where possible. The ResNet-20 model, which is not commonly evaluated, was implemented to match the version used by Frankle et al. (2021) so that we could compare using the benchmark supplied by this paper.
For smaller datasets, it is common to patch models defined for ImageNet. Specifically, for ResNets, we replace the first convolution with one that has a 3 × 3 filter and stride 1; the first max-pooling layer is replaced with an identity operation. For VGG, we follow the convention used by works such as FORCE (de Jorge et al., 2021). We do not change any convolutional layers, but we change the classifier to use a single global average pooling layer, followed by a single fully-connected layer.
A.2 TRAINING DETAILS
For CIFAR-10, CIFAR-100 and TinyImageNet we perform 3 meta-steps to calculate our saliency criteria. We train the resulting models for 200 epochs, with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 100 and 150. Weight decay was set to 5 × 10⁻⁴. Batch size for CIFAR-10, CIFAR-100, and TinyImageNet was 256. For CIFAR-10 and CIFAR-100 we augment training data by applying random cropping (32 × 32, padding 4) and horizontal flipping. For TinyImageNet we use the same procedure, with random cropping parameters set to 64 × 64, padding 4.

For ImageNet we train models for 100 epochs, with an initial learning rate of 0.1; we divide the learning rate by 10 at epochs 30, 60 and 90. Weight decay was set to 1 × 10⁻⁴. Batch size was 256. We use the first-order approximation to do pruning, and use 1024 steps for ResNet-50. For VGG-19 we use 2048 steps, but with the batch size set to 128 (due to memory limitations, as our implementation only utilized a single GPU for meta-training). We apply random resizing, then crop the image to 224 × 224, with horizontal flipping.
A.3 IMPLEMENTATIONS
In addition to our code, the reader may find it useful to reference the following repos from related work. Our experiments were performed using code derived from these implementations:
B NUMBERS FROM FIGURE 2
Table 4: Numerical results for ResNet-20 on CIFAR-10

Sparsity (%) | 20.0 | 36.0 | 48.8 | 59.0 | 67.2 | 73.8 | 79.0 | 83.2 | 86.6 | 89.3 | 91.4 | 93.1 | 94.5 | 95.6 | 96.5 | 97.2 | 97.7 | 98.2
LTR after Training | 91.8 ± 0.2 | 91.9 ± 0.2 | 91.9 ± 0.2 | 91.7 ± 0.2 | 91.5 ± 0.1 | 91.4 ± 0.1 | 91.1 ± 0.1 | 90.6 ± 0.1 | 90.1 ± 0.0 | 89.2 ± 0.1 | 88.0 ± 0.2 | 86.8 ± 0.2 | 85.7 ± 0.1 | 84.4 ± 0.2 | 82.8 ± 0.1 | 81.2 ± 0.3 | 79.4 ± 0.3 | 77.3 ± 0.5
Magnitude after Training | 92.2 ± 0.3 | 92.0 ± 0.2 | 92.0 ± 0.2 | 91.7 ± 0.1 | 91.5 ± 0.2 | 91.3 ± 0.2 | 91.1 ± 0.2 | 90.7 ± 0.2 | 90.2 ± 0.2 | 89.4 ± 0.2 | 88.7 ± 0.2 | 87.7 ± 0.2 | 86.5 ± 0.2 | 85.2 ± 0.2 | 83.5 ± 0.3 | 81.9 ± 0.3 | 80.4 ± 0.2 | 77.7 ± 0.4
Magnitude at Initialization | 91.5 ± 0.2 | 91.2 ± 0.1 | 90.8 ± 0.1 | 90.7 ± 0.2 | 90.2 ± 0.1 | 89.8 ± 0.2 | 89.3 ± 0.2 | 88.6 ± 0.2 | 87.9 ± 0.3 | 87.0 ± 0.3 | 86.1 ± 0.2 | 85.2 ± 0.4 | 83.9 ± 0.2 | 82.5 ± 0.4 | 80.7 ± 0.5 | 79.1 ± 0.4 | 77.2 ± 0.4 | 74.5 ± 0.7
SNIP | 91.8 ± 0.2 | 91.2 ± 0.3 | 90.9 ± 0.1 | 90.7 ± 0.1 | 90.1 ± 0.2 | 89.7 ± 0.3 | 89.0 ± 0.2 | 88.5 ± 0.3 | 87.7 ± 0.2 | 87.2 ± 0.4 | 85.8 ± 0.1 | 84.7 ± 0.5 | 83.8 ± 0.3 | 82.5 ± 0.4 | 80.9 ± 0.2 | 79.1 ± 0.2 | 77.3 ± 0.2 | 74.0 ± 0.5
GraSP | 91.5 ± 0.1 | 91.3 ± 0.2 | 91.2 ± 0.1 | 90.6 ± 0.2 | 90.3 ± 0.2 | 89.6 ± 0.1 | 89.1 ± 0.2 | 88.4 ± 0.2 | 87.9 ± 0.1 | 87.0 ± 0.2 | 85.9 ± 0.1 | 85.1 ± 0.4 | 83.9 ± 0.4 | 82.8 ± 0.2 | 81.2 ± 0.2 | 79.7 ± 0.3 | 78.0 ± 0.3 | 76.0 ± 0.5
SynFlow | 91.7 ± 0.1 | 91.3 ± 0.2 | 91.2 ± 0.1 | 90.8 ± 0.1 | 90.4 ± 0.2 | 89.8 ± 0.1 | 89.5 ± 0.3 | 88.9 ± 0.4 | 88.1 ± 0.1 | 87.4 ± 0.5 | 86.1 ± 0.2 | 85.4 ± 0.2 | 84.3 ± 0.2 | 82.9 ± 0.2 | 81.7 ± 0.2 | 80.0 ± 0.3 | 78.6 ± 0.4 | 76.4 ± 0.4
Random | 91.6 ± 0.2 | 91.2 ± 0.2 | 90.8 ± 0.3 | 90.5 ± 0.2 | 89.8 ± 0.2 | 89.0 ± 0.4 | 88.4 ± 0.2 | 87.5 ± 0.3 | 86.6 ± 0.2 | 85.6 ± 0.3 | 84.3 ± 0.4 | 83.1 ± 0.4 | 81.6 ± 0.3 | 79.6 ± 0.4 | 74.2 ± 6.4 | 64.7 ± 9.7 | 56.9 ± 8.5 | 43.7 ± 12.5
ProsPr | 92.3 ± 0.1 | 92.1 ± 0.0 | 91.7 ± 0.2 | 91.5 ± 0.1 | 91.0 ± 0.2 | 90.5 ± 0.0 | 90.1 ± 0.1 | 89.6 ± 0.2 | 88.5 ± 0.5 | 87.8 ± 0.1 | 86.9 ± 0.3 | 85.5 ± 0.6 | 84.3 ± 0.2 | 83.0 ± 0.9 | 80.8 ± 0.5 | 79.6 ± 0.7 | 77.0 ± 0.8 | 74.2 ± 0.3
Table 5: Numerical results for VGG-16 on CIFAR-10
Sparsity (%) | 20.0 | 36.0 | 48.8 | 59.0 | 67.2 | 73.8 | 79.0 | 83.2 | 86.6 | 89.3 | 91.4 | 93.1 | 94.5 | 95.6 | 96.5 | 97.2 | 97.7 | 98.2
LTR after Training | 93.5 ± 0.1 | 93.6 ± 0.1 | 93.6 ± 0.1 | 93.6 ± 0.1 | 93.8 ± 0.1 | 93.6 ± 0.1 | 93.6 ± 0.1 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.7 ± 0.1 | 93.7 ± 0.1 | 93.8 ± 0.1 | 93.5 ± 0.2 | 93.4 ± 0.1 | 93.2 ± 0.1 | 93.0 ± 0.2 | 92.7 ± 0.1 | 92.1 ± 0.4
Magnitude after Training | 93.9 ± 0.2 | 93.9 ± 0.2 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.9 ± 0.1 | 94.0 ± 0.2 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.9 ± 0.2 | 93.9 ± 0.2 | 93.8 ± 0.2 | 93.7 ± 0.2 | 93.5 ± 0.1 | 93.5 ± 0.1 | 93.3 ± 0.2 | 93.0 ± 0.1 | 92.9 ± 0.1 | 91.7 ± 0.8
Magnitude at Initialization | 93.6 ± 0.2 | 93.4 ± 0.2 | 93.3 ± 0.1 | 93.2 ± 0.2 | 93.3 ± 0.3 | 93.0 ± 0.1 | 93.1 ± 0.1 | 92.9 ± 0.1 | 92.9 ± 0.2 | 92.7 ± 0.1 | 92.5 ± 0.2 | 92.3 ± 0.1 | 92.2 ± 0.2 | 92.0 ± 0.1 | 91.8 ± 0.2 | 91.5 ± 0.1 | 91.3 ± 0.3 | 90.9 ± 0.2
SNIP | 93.6 ± 0.1 | 93.4 ± 0.1 | 93.3 ± 0.1 | 93.4 ± 0.2 | 93.3 ± 0.2 | 93.4 ± 0.1 | 93.1 ± 0.1 | 93.1 ± 0.1 | 93.2 ± 0.1 | 93.1 ± 0.1 | 92.9 ± 0.1 | 92.8 ± 0.2 | 92.8 ± 0.1 | 92.3 ± 0.2 | 92.2 ± 0.1 | 92.1 ± 0.1 | 91.7 ± 0.1 | 91.5 ± 0.1
GraSP | 93.5 ± 0.1 | 93.4 ± 0.2 | 93.5 ± 0.0 | 93.3 ± 0.1 | 93.2 ± 0.2 | 93.3 ± 0.2 | 93.2 ± 0.1 | 93.0 ± 0.3 | 93.0 ± 0.1 | 92.7 ± 0.2 | 92.8 ± 0.1 | 92.4 ± 0.1 | 92.3 ± 0.1 | 92.2 ± 0.1 | 91.9 ± 0.1 | 91.6 ± 0.2 | 91.5 ± 0.0 | 91.2 ± 0.2
SynFlow | 93.6 ± 0.2 | 93.6 ± 0.1 | 93.5 ± 0.1 | 93.4 ± 0.1 | 93.4 ± 0.2 | 93.5 ± 0.2 | 93.2 ± 0.1 | 93.2 ± 0.1 | 93.1 ± 0.1 | 92.9 ± 0.1 | 92.7 ± 0.2 | 92.5 ± 0.1 | 92.3 ± 0.1 | 92.0 ± 0.1 | 91.8 ± 0.3 | 91.3 ± 0.1 | 91.0 ± 0.2 | 90.6 ± 0.2
Random | 93.6 ± 0.3 | 93.2 ± 0.1 | 93.2 ± 0.2 | 93.0 ± 0.2 | 92.7 ± 0.2 | 92.4 ± 0.2 | 92.2 ± 0.1 | 91.7 ± 0.1 | 91.2 ± 0.1 | 90.8 ± 0.2 | 90.3 ± 0.2 | 89.6 ± 0.2 | 88.8 ± 0.2 | 88.3 ± 0.4 | 87.6 ± 0.1 | 86.4 ± 0.2 | 86.0 ± 0.4 | 84.5 ± 0.4
ProsPr | 93.7 ± 0.2 | 93.7 ± 0.1 | 93.9 ± 0.1 | 93.8 ± 0.1 | 93.8 ± 0.1 | 93.5 ± 0.2 | 93.6 ± 0.1 | 93.4 ± 0.3 | 93.5 ± 0.2 | 93.3 ± 0.1 | 93.0 ± 0.1 | 93.0 ± 0.1 | 92.8 ± 0.3 | 92.7 ± 0.1 | 92.6 ± 0.1 | 92.2 ± 0.1 | 92.1 ± 0.2 | 91.6 ± 0.4
Table 6: Numerical results for ResNet-18 on TinyImageNet
Sparsity (%) | 20.0 | 36.0 | 48.8 | 59.0 | 67.2 | 73.8 | 79.0 | 83.2 | 86.6 | 89.3 | 91.4 | 93.1 | 94.5 | 95.6 | 96.5 | 97.2 | 97.7 | 98.2
LTR after Training | 51.7 ± 0.2 | 51.4 ± 0.3 | 51.5 ± 0.4 | 52.1 ± 0.4 | 51.8 ± 0.4 | 52.0 ± 0.1 | 52.0 ± 0.1 | 52.0 ± 0.2 | 52.1 ± 0.3 | 52.0 ± 0.2 | 52.4 ± 0.2 | 51.8 ± 0.4 | 51.8 ± 0.6 | 51.4 ± 0.4 | 50.9 ± 0.2 | 49.3 ± 0.7 | 48.3 ± 0.7 | 46.0 ± 0.3
Magnitude after Training | 51.7 ± 0.3 | 51.4 ± 0.1 | 51.7 ± 0.2 | 51.5 ± 0.3 | 51.7 ± 0.4 | 51.4 ± 0.5 | 51.1 ± 0.3 | 51.4 ± 0.4 | 51.3 ± 0.4 | 51.1 ± 0.6 | 51.7 ± 0.3 | 51.3 ± 0.3 | 51.8 ± 0.4 | 51.2 ± 0.3 | 51.1 ± 0.2 | 50.4 ± 0.2 | 49.0 ± 0.2 | 47.8 ± 0.5
Magnitude at Initialization | 51.0 ± 0.3 | 51.2 ± 0.3 | 51.0 ± 0.2 | 50.5 ± 0.5 | 50.6 ± 0.3 | 50.0 ± 0.3 | 50.3 ± 0.2 | 50.3 ± 0.3 | 50.0 ± 0.1 | 49.8 ± 0.5 | 49.0 ± 0.1 | 48.3 ± 0.3 | 47.2 ± 0.2 | 46.2 ± 0.2 | 44.4 ± 0.5 | 42.2 ± 0.1 | 40.8 ± 0.4 | 38.1 ± 0.6
SNIP | 51.4 ± 0.2 | 51.5 ± 0.3 | 51.4 ± 0.3 | 51.3 ± 0.5 | 51.6 ± 0.4 | 51.4 ± 0.5 | 51.9 ± 0.6 | 51.5 ± 0.3 | 51.0 ± 0.2 | 51.2 ± 0.7 | 50.6 ± 0.3 | 50.1 ± 0.3 | 49.2 ± 0.3 | 47.8 ± 0.2 | 46.7 ± 0.1 | 45.2 ± 0.4 | 44.5 ± 0.3 | 42.3 ± 0.3
GraSP | 49.8 ± 0.4 | 49.1 ± 0.3 | 49.5 ± 0.2 | 49.5 ± 0.4 | 49.2 ± 0.1 | 49.5 ± 0.2 | 48.7 ± 0.1 | 49.0 ± 0.5 | 48.8 ± 0.4 | 48.3 ± 0.1 | 48.2 ± 0.1 | 47.7 ± 0.2 | 46.5 ± 0.1 | 45.5 ± 0.7 | 44.9 ± 0.2 | 44.1 ± 1.0 | 42.9 ± 0.5 | 41.0 ± 0.1
SynFlow | 51.8 ± 0.3 | 51.6 ± 0.3 | 51.7 ± 0.7 | 51.8 ± 0.2 | 51.3 ± 0.4 | 51.3 ± 0.4 | 51.5 ± 0.2 | 51.0 ± 0.4 | 50.2 ± 0.4 | 50.4 ± 0.3 | 49.1 ± 0.0 | 48.0 ± 0.5 | 46.7 ± 0.7 | 45.6 ± 0.0 | 44.0 ± 0.2 | 42.2 ± 0.3 | 40.0 ± 0.1 | 38.2 ± 0.5
Random | 50.6 ± 0.5 | 50.1 ± 0.2 | 49.9 ± 0.3 | 48.7 ± 0.2 | 48.0 ± 0.4 | 48.0 ± 0.6 | 46.4 ± 0.1 | 45.9 ± 0.5 | 44.7 ± 0.2 | 43.6 ± 0.3 | 42.7 ± 0.2 | 41.4 ± 0.4 | 40.2 ± 0.2 | 37.2 ± 0.2 | 36.2 ± 0.7 | 34.0 ± 0.4 | 32.2 ± 0.5 | 30.0 ± 0.3
ProsPr | 51.8 ± 0.4 | 51.4 ± 0.7 | 51.2 ± 0.9 | 52.0 ± 0.2 | 51.8 ± 0.1 | 51.2 ± 0.4 | 52.0 ± 0.3 | 51.6 ± 0.7 | 51.1 ± 0.4 | 50.7 ± 0.6 | 50.9 ± 0.3 | 50.8 ± 1.2 | 51.1 ± 0.7 | 50.8 ± 0.5 | 50.8 ± 0.3 | 49.6 ± 0.6 | 49.2 ± 0.2 | 46.9 ± 0.7
C WALL CLOCK TIME FOR STRUCTURE PRUNING AT INITIALIZATION
When pruning is done at convergence, the benefits of having a compressed model (in terms of memory saving and speed-up) can only be utilized at inference/deployment time. However, with pruning-at-initialization these benefits can be reaped during training as well. This is especially true in the case of structured pruning, where pruning results in weights and convolutional kernels with smaller dimensions (as opposed to unstructured pruning, where we end up with sparse weights with the original dimensions). This means that, in addition to memory savings, training takes fewer operations, which speeds up training. To evaluate the benefits of pruning at initialization in terms of speed improvements, we measured the wall-clock training time on an NVIDIA RTX 2080 Ti GPU for the architectures used in Section 4.3 (and additionally on the ImageNet dataset). The results in Table 7 show that structured pruning with ProsPr can significantly reduce the overall training time.
D RESULTS ON SEGMENTATION TASK
An interesting, albeit less common, application for pruning models is within the context of segmentation. In a recent paper, Jeong et al. (2021) train and prune the U-Net (Ronneberger et al., 2015) architecture on two image datasets from the Cell Tracking Challenge (PhC-C2DH-U373 and DIC-C2DH-HeLa). They use the classic multi-step approach of gradually applying magnitude pruning interleaved with fine-tuning stages. To evaluate the flexibility of our method, we used meta-gradients at the beginning of training (on a randomly initialized U-Net), pruned in a single shot, and trained the network once for the same number of epochs (50). We kept the training set-up the same as the baseline by Jeong et al. (2021) (i.e., resizing images and segmentation maps to (256, 256) and setting aside 30% of training data for validation) and similarly aim to find the highest prune ratio that does not result in IOU degradation. We report the intersection-over-union (IOU) metric for the two datasets in Tables 8 and 9:
Table 8: Mean-IOU on U373 validation
Method        | Prune Ratio | Mean IOU
Unpruned      | -           | 0.9371
Jeong et al.  | 95%         | 0.9368
ProsPr        | 97%         | 0.9369
Table 9: Mean-IOU on HeLa validation
Method        | Prune Ratio | Mean IOU
Unpruned      | -           | 0.7514
Jeong et al.  | 81.8%       | 0.7411
ProsPr        | 90%         | 0.7491
These results show that our method works as well as (or better than) this compute-expensive baseline, in the sense that we can prune more parameters while keeping the IOU score the same.
E SELF-SUPERVISED INITIALIZATION
To evaluate the robustness and consistency of our method against non-random initialization, we ran experiments using BYOL to learn representations from unlabeled samples (Grill et al., 2020). We used ResNet-18 as a backbone and trained for 1000 epochs with an embedding size of 64. Unlike the vanilla ResNet-18 architecture used in Section 4.3, we used the commonly-used modified version of ResNet-18 for smaller inputs (removing the first pooling layer and modifying the first convolutional layer to have a kernel size of 3, stride of 1, and padding size of 1). We then used this trained ResNet-18 as the initialization for our meta-gradient pruning method. After the pruning step, all layers were trained as before until convergence. All training hyper-parameters were kept as before. The results (final test accuracies for 95% pruning) are summarized in Table 10.
These results show the robustness of our method for this particular self-supervised initialization. Starting from a learned representation can be challenging because these representations are much closer to weight values at convergence, and therefore the magnitude of their gradients is significantly smaller than randomly initialized weights. However, this is less of a problem for meta-gradients as their magnitude is still significant due to back-propagation through training steps. This can be seen in Figure 3 which shows the L2 norm of gradients of each layer of a BYOL-initialized ResNet-18 for meta-gradients compared to normal gradients. It can be seen that meta-gradients provide a stronger signal compared to normal gradients. | 1. What is the focus of the paper regarding efficient pruning methods?
2. What are the strengths of the proposed approach, particularly in leveraging loss sensitivity?
3. What are the weaknesses of the paper, especially regarding the performance aspect?
4. Do you have any suggestions for further studies to improve the pruning method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This work proposed a new efficient pruning method that leverages loss sensitivity ("saliency score") during a few initial training steps. The authors leveraged meta-gradients with appropriate approximations to stabilize and speed up the pruning.
Review
Strength: The overall method is well explained and easy to follow.
Weakness: The performance of this work can be decoupled into two aspects:
If the meta-gradients w.r.t. the masks ($c$) are vital, or whether single-step gradients (i.e. $M = 1$) may also work.
If dropping small terms (Eq. 15 to 16) is indeed negligible.
I did not see ablation studies on these two aspects. Therefore I propose some further studies:
If the single-step gradient is used (no meta-gradients) to update the mask with the same total number of iterations (e.g. 1024 steps with a batch size of 256 for ResNet-50), what will be the pruning results? This is actually more GPU-memory efficient since no extra computational graphs are stored. Both "pruning by optimizing the mask" (this work) and magnitude pruning could be studied.
If we do not drop small terms in Eq. 15, what will be the pruning results? A smaller $M$ would be acceptable if storing the computational graph is too GPU-memory consuming.
In short, I think it is important to understand which part is vital to pruning: 1) optimizing the mask or magnitude pruning; 2) meta-gradients or just single-step gradients with more steps; 3) dropping or keeping the small terms.
ICLR | Title
DINO: A Conditional Energy-Based GAN for Domain Translation
Abstract
Domain translation is the process of transforming data from one domain to another while preserving the common semantics. Some of the most popular domain translation systems are based on conditional generative adversarial networks, which use source domain data to drive the generator and as an input to the discriminator. However, this approach does not enforce the preservation of shared semantics since the conditional input can often be ignored by the discriminator. We propose an alternative method for conditioning and present a new framework, where two networks are simultaneously trained, in a supervised manner, to perform domain translation in opposite directions. Our method is not only better at capturing the shared information between two domains but is more generic and can be applied to a broader range of problems. The proposed framework performs well even in challenging cross-modal translations, such as video-driven speech reconstruction, for which other systems struggle to maintain correspondence.
1 INTRODUCTION
Domain translation methods exploit the information redundancy often found in data from different domains in order to find a mapping between them. Successful applications of domain translation include image style transfer (Zhu et al., 2017a) and speech-enhancement (Pascual et al., 2017). Furthermore, these systems are increasingly being used to translate across modalities in applications such as speech-driven animation (Chung et al., 2017) and caption-based image generation (Reed et al., 2016). Some of the most popular methods for domain translation are based on conditional Generative Adversarial Networks (cGANs) (Mirza & Osindero, 2014). The conditional information in cGANs is used to drive the generation and to enforce the correspondence between condition and sample. Various alternatives have been proposed for how the condition should be included in the discriminator (Miyato & Koyama, 2018; Reed et al., 2016) but the majority of frameworks provide it as an input, hoping that the sample’s correlation with the condition will play a role in distinguishing between synthesized and genuine samples. The main drawback of this approach is that it does not encourage the use of the conditional information and therefore its contribution can be diminished or even ignored. This may lead to samples that are not semantically consistent with the condition.
In this paper, we propose the Dual Inverse Network Optimisation (DINO) framework, which is based on energy-based GANs (Zhao et al., 2017) and consists of two networks that perform translation in opposite directions as shown in Figure 1. In this framework, one network (Forward network) translates data from the source domain to the target domain while the other (Reverse Network) performs the inverse translation. The Reverse network’s goal is to minimize the reconstruction error for genuine data and to maximize it for generated data. The Forward network aims to produce samples that can be accurately reconstructed back to the source domain by the Reverse Network. Therefore, during training the Forward network is trained as a generator and the Reverse as a discriminator. Since discrimination is based on the ability to recover source domain samples, the Forward network is driven to produce samples that are not only realistic but also preserve the shared semantics. We show that this approach is effective across a broad range of supervised translation problems, capturing the correspondence even when domains are from different modalities (i.e., video-audio). In detail, the contributions of this paper are:
Source code: https://github.com/DinoMan/DINO
• A domain translation framework, based on a novel conditioning mechanism for energy-based GANs, where the adversarial loss is based on the prediction of the condition.
• An adaptive method for balancing the Forward and Reverse networks, which makes training more robust and improves performance.
• A method for simultaneously training two networks to perform translation in inverse directions, which requires fewer parameters than other domain translation methods.
• The first end-to-end trainable model for video-driven speech reconstruction capable of producing intelligible speech without requiring task-specific losses to enforce correct content.
2 RELATED WORK
Domain translation covers a wide range of problems including image-to-image translation (Isola et al., 2017), caption-based image synthesis (Qiao et al., 2019), and text-to-speech synthesis (Arik et al., 2017). Unsupervised translation methods attempt to find a relationship between domains using unpaired training data. However, finding correspondence without supervision is an ill-posed problem which is why these methods often impose additional constraints on their networks or objectives. The majority of unsupervised methods are applied to image-to-image translation problems. The CoGAN model (Liu & Tuzel, 2016) imposes a weight-sharing constraint on specific layers of two GANs, which are trained to produce samples from different domains. The motivation is that sharing weights in layers associated with high-level features should help preserve the overall structure of the images. This approach is extended in the UNIT framework (Liu et al., 2017), where the generative networks are Variational Autoencoders (VAEs) with a shared latent space. The weight-sharing used in the CoGAN and UNIT frameworks restricts them to problems where both domains are of the same modality. A more generic method of achieving domain-correspondence is presented in the CycleGAN model proposed by Zhu et al. (2017a). The CycleGAN objective includes a cycle-consistency loss to ensure that image translation between two domains is invertible. Recently, Chen et al. (2020) showed that reusing part of the discriminators in CycleGAN as encoders for the generators achieves parameter reduction as well as better results. Although it is possible to apply the cycle consistency loss for cross-modal translation it has not been widely used in such scenarios.
Unlike unsupervised methods, supervised approaches rely on having a one-to-one correspondence between the data from different domains. The Pix2Pix model (Isola et al., 2017) uses cGANs to perform image-to-image translation and has inspired many subsequent works (Zhu et al., 2017a; Wang et al., 2018; Park et al., 2019). Compared to unsupervised methods, supervised approaches have had more success in translating across different modalities. Notable applications include speech-driven facial animation (Vougioukas et al., 2020) and text-to-image synthesis (Reed et al., 2016; Qiao et al., 2019). It is important to note that the adversarial loss in cGANs alone is often not capable of establishing domain correspondence, which is why these approaches also rely on additional reconstruction or perceptual losses (Johnson et al., 2016) in order to accurately capture semantics.
In many scenarios, the relationship between domains is not bijective (e.g. one-to-many mapping) hence it is desirable for translation systems to produce a diverse set of outputs for a given input. Achieving this diversity is a common issue with GAN-based translation systems (Isola et al., 2017; Liu et al., 2017) since they often suffer from mode collapse. The Pix2Pix model (Isola et al., 2017) proposes using dropout in both training and inference stages as a solution to this problem. Another successful approach is to apply the diversity regularisation presented in Yang et al. (2019). Furthermore, many works (Zhu et al., 2017b; Huang et al., 2018; Chang et al., 2018) attempt to solve this issue by enforcing a bijective mapping between the latent space and the target image domain. Finally, adding a reconstruction loss to the objective also discourages mode collapse (Rosca et al., 2017), by requiring that the entire support of the distribution of training images is covered.
2.1 CONDITIONAL GANS
The most common method for conditioning GANs is proposed by Mirza & Osindero (2014) and feeds the conditional information as input to both the generator and the discriminator. Using the condition in the discriminator assumes that the correlation of samples with the condition will be considered when distinguishing between real and fake samples. However, feeding the condition to the discriminator does not guarantee that the correspondence will be captured and could even lead to the condition being ignored by the network. This issue is shared across all methods which use the condition as input to the discriminator (Miyato & Koyama, 2018; Reed et al., 2016). Furthermore, it explains why these models perform well when there is structural similarity between domains (e.g. image-to-image translation) but struggle to maintain semantics in cases where domains are significantly different such as cross-modal applications (e.g. video-to-speech).
Another method presented in Park et al. (2019) proposes generator conditioning through spatially-adaptive normalisation layers (SPADE). This approach has been used to produce state of the art results in image generation. It should be noted that this approach requires that source domain data be one-hot encoded semantic segmentation maps and is therefore limited to specific image-translation problems (i.e. segmentation maps to texture image translations). More importantly, conditioning of the discriminator is still done by feeding the condition as an input and hence will have similar drawbacks as other cGAN based methods with regards to semantic preservation.
In some cases it is possible to guide the discriminator to learn specific semantics by performing a self-supervised task. An example of this is the discriminator proposed in Vougioukas et al. (2020) which enforces audio-visual synchrony in facial animation by detecting in and out of sync pairs of video and audio. However, this adversarial loss alone cannot fully enforce audio-visual synchronization, which is why additional reconstruction losses are required. Finally, it is important to note that finding a self-supervised task capable of enforcing the desired semantics is not always possible.
2.2 ENERGY-BASED GANS
Energy-based GANs (Mathieu et al., 2015; Berthelot et al., 2017) use a discriminator D which is an autoencoder. The generator G synthesizes a sample G(z) from a noise sample z ∈ Z. The discriminator output is fed to a loss function L in order to form an energy function L_D(·) = L(D(·)). The objective of the discriminator is to minimize the energy assigned to real data x ∈ X and maximize the energy of generated data. The generator has the opposite objective, leading to the following minimax game:
\min_D \max_G V(D, G) = L_D(x) - L_D(G(z)) \quad (1)
The EBGAN model proposed by Mathieu et al. (2015) uses the mean square error (MSE) to measure the reconstruction and a margin loss to limit the penalization for generated samples. The resulting objective thus becomes:
\min_D \max_G V(D, G) = \|D(x) - x\| + \max(0,\, m - \|D(G(z)) - G(z)\|) \quad (2)
The margin m corresponds to the maximum energy that should be assigned to a synthesized sample. Performance depends on the magnitude of the margin, with large values causing instability and small values resulting in mode collapse. For this reason, some approaches (Wang et al., 2017; Mathieu et al., 2015) recommend decaying the margin during training. An alternative approach is proposed by Berthelot et al. (2017) which introduces an equilibrium concept to balance the generator
and discriminator and measure training convergence. Energy-based GANs have been successful in generating high quality images although their use for conditional generation is limited.
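To make the EBGAN objective concrete, the following is a minimal PyTorch sketch of Equation 2, assuming D is an autoencoder module and G a generator; the module interfaces and default margin are illustrative, not prescribed by the cited papers.

```python
import torch
import torch.nn.functional as F

def ebgan_losses(D, G, x, z, margin=1.0):
    """EBGAN losses (Eq. 2): MSE reconstruction error as the energy,
    with a margin capping the penalty on generated samples."""
    fake = G(z)
    energy_real = F.mse_loss(D(x), x)                          # L_D(x)
    energy_fake = F.mse_loss(D(fake.detach()), fake.detach())  # L_D(G(z)), no grad to G
    d_loss = energy_real + torch.clamp(margin - energy_fake, min=0.0)
    g_loss = F.mse_loss(D(fake), fake)   # the generator lowers its own energy
    return d_loss, g_loss
```

In a training loop, d_loss would be stepped with the discriminator's optimizer and g_loss with the generator's.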
3 METHOD
The encoder-decoder structure used in the discriminator of an energy-based GAN gives it the flexibility to perform various regression tasks. The choice of task determines how energy is distributed and can help the network focus on specific characteristics. We propose a conditional version of EBGAN where the generator (Forward network) and discriminator (Reverse network) perform translations in opposite directions. The Reverse network is trained to minimize the reconstruction error for real samples (low energy) and maximize the error for generated samples (high energy). The Forward network aims to produce samples that will be assigned a low energy by the Reverse network. Generated samples that do not preserve the semantics cannot be accurately reconstructed back to the source domain and are thus penalized. Given a condition x ∈ X and its corresponding target y ∈ Y and networks F : X → Y and R : Y → X, the objective of the DINO framework becomes:
\min_R \max_F V(R, F) = \mathcal{L}(R(y), x) - \mathcal{L}(R(F(x)), x) \quad (3)
where L (·, ·) is a loss measuring the reconstruction error between two samples. Multiple choices exist for the loss function and their effects are explained in Lecun et al. (2006). We propose using the MSE to measure reconstruction error and a margin loss similar to that used in EBGAN. However, as shown in Mathieu et al. (2015) this method is sensitive to the value of margin parameter m, which must be gradually decayed to avoid instability. We propose using an adaptive method inspired by BEGAN (Berthelot et al., 2017) which is based on maintaining a fixed ratio γ ∈ [0, 1) between the reconstruction of positive and negative samples.
\gamma = \frac{\mathcal{L}(R(y), x)}{\mathcal{L}(R(F(x)), x)} \quad (4)
Balancing is achieved using a proportional controller with gain λ. A typical value for the gain is λ = 0.001. The output of the controller kt ∈ [0, 1] determines the amount of emphasis that the Reverse network places on the reconstruction error of generated samples. The balance determines an upper bound for the energy of fake samples, which is a fixed multiple of the energy assigned to real samples. When the generator is producing samples with a low energy they are pushed to this limit faster than when the generator is already producing high-energy samples. Since the ratio of reconstruction errors is kept fixed this limit will decay as the reconstruction error for real samples improves over time. This achieves a similar result to a decaying margin loss without the necessity for a decay schedule. The output of the controller as well as the reconstruction error for real and fake samples during training is shown in Figure 2. We notice that the controller output increases at the start of training in order to push generated samples to a higher energy value and reduces once the limit determined by γ is reached. Although this approach is inspired by BEGAN there are some key differences which prevent the BEGAN from working with the predictive conditioning proposed in this paper. These are discussed in detail in Section A.4 of the appendix.
In practice we find it advantageous to use the margin loss in combination with adaptive balancing. In this case the margin parameter serves as a hard cutoff for the energy of generated samples and
helps stabilize the system at the beginning of training. As training progresses and reconstruction of real samples improves training relies more on the soft limit enforced by the energy balancing mechanism. In this case we can set γ = 0 to fall back to a fixed margin approach. The training objective is shown in Equation 5. When dealing with one-to-many scenarios we find that adding a reconstruction loss to the generator’s objective can help improve sample diversity.
L_R = \|R(y) - x\| + k_t \cdot \max(0,\, m - \|R(F(x)) - x\|)
L_F = \|R(F(x)) - x\|
k_{t+1} = k_t + \lambda \cdot \left[ \|R(y) - x\| - \gamma \cdot \|R(F(x)) - x\| \right] \quad (5)
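A minimal PyTorch sketch of one update from Equation 5, including the proportional controller, might look as follows; the module interfaces are assumptions, and the defaults mirror the values stated in the text (λ = 0.001) and in Section 4.1 (γ = 0.8).

```python
import torch
import torch.nn.functional as F

def dino_step(Fwd, Rev, x, y, k, lam=1e-3, gamma=0.8, margin=1.0):
    """One unidirectional DINO update (Eq. 5). Fwd: X -> Y (generator),
    Rev: Y -> X (discriminator), k: controller state in [0, 1]."""
    y_fake = Fwd(x)
    e_real = F.mse_loss(Rev(y), x)                 # L(R(y), x)
    e_fake = F.mse_loss(Rev(y_fake.detach()), x)   # L(R(F(x)), x), no grad to Fwd
    loss_R = e_real + k * torch.clamp(margin - e_fake, min=0.0)
    loss_F = F.mse_loss(Rev(y_fake), x)            # Fwd seeks low energy under Rev
    # proportional controller: drive e_real toward gamma * e_fake (Eq. 4)
    k = min(max(k + lam * (e_real.item() - gamma * e_fake.item()), 0.0), 1.0)
    return loss_R, loss_F, k
```

loss_R would be stepped with the Reverse network's optimizer and loss_F with the Forward network's.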
3.1 BIDIRECTIONAL TRANSLATION
It is evident from Equation 5 that the two networks have different goals and that only the Forward network will produce realistic translations since the Reverse network is trained only using an MSE loss. This prohibits its use for domain translation and limits it to facilitating the training of the Forward network. For the DINO framework, since the Forward and Reverse network have the same structure we can swap the roles of the networks and retrain to obtain realistic translation in the opposite direction. However, it is also possible to train both networks simultaneously by combining the objectives for both roles (i.e. discriminator and generator). This results in the following zero-sum two player game:
\min_R \max_F V(R, F) = \mathcal{L}(R(y), x) - \mathcal{L}(R(F(x)), x) + \mathcal{L}(F(R(y)), y) - \mathcal{L}(F(x), y) \quad (6)
In this game both players have the same goal which is to minimize the reconstruction error for real samples and to maximize it for fake samples while also ensuring that their samples are assigned a low energy by the other player. Each player therefore behaves both as a generator and as a discriminator. However, in practice we find that it is difficult for a network to achieve the objectives for both roles, causing instability during training. The work proposed by Chen et al. (2020), where discriminators and generators share encoders, comes to a similar conclusion and proposes decoupling the training for different parts of the networks. This is not possible in our framework since the discriminator for one task is the generator for the other. To solve this problem we propose branching the decoders of the networks to create two heads which are exclusively used for either discrimination or generation. We find empirically that the best performance in our image-to-image experiments is achieved when branching right after the third layer of the decoder. Additionally, the network encoders are frozen during the generation stage. The bidirectional training paradigm is illustrated in Figure 3. A minimal sketch of this branching follows.
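Below is an illustrative module with a shared decoder trunk and separate discriminative and generative heads; the layer counts and channel sizes are assumptions chosen for brevity, not the paper's exact architecture, and freezing the encoder in the generation role is implemented here by blocking gradients.

```python
import torch
import torch.nn as nn

class BranchedTranslator(nn.Module):
    """Translator whose decoder branches into a discriminative head
    and a generative head after a shared trunk (sizes illustrative)."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.ReLU(),
        )
        self.trunk = nn.Sequential(                      # shared decoder layers
            nn.ConvTranspose2d(2 * ch, ch, 4, 2, 1), nn.ReLU(),
        )
        self.disc_head = nn.ConvTranspose2d(ch, 3, 4, 2, 1)
        self.gen_head = nn.ConvTranspose2d(ch, 3, 4, 2, 1)

    def forward(self, x, role="gen"):
        if role == "gen":                                # encoder frozen as a generator
            with torch.no_grad():
                h = self.encoder(x)
        else:
            h = self.encoder(x)
        h = self.trunk(h)
        return self.gen_head(h) if role == "gen" else self.disc_head(h)
```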
When training network R as a discriminator we use the stream that passes through the discriminative head R_disc, and when training it as a generator we use the stream that passes through the generative head R_gen. The same applies for player F, which uses streams F_disc and F_gen for discrimination and generation, respectively. To maintain balance during training we use a different controller for each player, which results in the objective shown in Equation 7. The first two terms in each player's objective represent the player's goal as a discriminator and the last term reflects its goal as a generator.

L_R = \underbrace{\mathcal{L}(R_{disc}(y), x) - k_t \cdot \mathcal{L}(R_{disc}(F_{gen}(x)), x)}_{\text{discriminator objective}} + \underbrace{\mathcal{L}(F_{disc}(R_{gen}(y)), y)}_{\text{generator objective}}
L_F = \underbrace{\mathcal{L}(F_{disc}(x), y) - \mu_t \cdot \mathcal{L}(F_{disc}(R_{gen}(y)), y)}_{\text{discriminator objective}} + \underbrace{\mathcal{L}(R_{disc}(F_{gen}(x)), x)}_{\text{generator objective}}
k_{t+1} = k_t + \lambda_R \cdot [\mathcal{L}(R_{disc}(y), x) - \gamma_D \cdot \mathcal{L}(R_{disc}(F_{gen}(x)), x)]
\mu_{t+1} = \mu_t + \lambda_F \cdot [\mathcal{L}(F_{disc}(x), y) - \gamma_G \cdot \mathcal{L}(F_{disc}(R_{gen}(y)), y)] \quad (7)
3.2 COMPARISON WITH OTHER METHODS
As mentioned in Section 2.1, the cGAN conditioning mechanism, used in most supervised translation systems, struggles to preserve the shared semantics in cases where there is no structural similarity between domains. The DINO framework attempts to overcome this limitation by using a different paradigm, where the condition is predicted by the discriminator instead of being fed as an additional input, forcing the generator to maintain the common semantics. Our approach is inspired by several semi-supervised training techniques for GANs (Salimans et al., 2016; Odena et al., 2017; Springenberg, 2015), which have shown that specializing the discriminator by performing a classification task adds structure to its latent space and improves the quality of generated samples. However, these approaches are not designed for conditional generation and use classification only as a proxy task. This differs from our approach where discrimination is driven directly by the prediction of the condition.
Another advantage of our system stems from its use of an encoder-decoder structure for the Reverse network. This provides flexibility since the Reverse network can be easily adapted to perform a variety of different translation tasks. In contrast, the multi-stream discriminators used in cross-modal cGANs require fusing representations from different streams. The fusion method as well as the stage at which embeddings are fused is an important design decision that must be carefully chosen depending on the task since it can greatly affect the performance of these models.
The objective of the generator in Equation 5 resembles the cycle-consistency loss used in many unsupervised methods such as CycleGAN (Zhu et al., 2017a) and NICE-GAN (Chen et al., 2020). This also bears resemblance to the back-translation used in bidirectional neural machine translation methods (Artetxe et al., 2018; Lample et al., 2018). However, it is important to note that the cycle-consistency loss used in these approaches is not an adversarial loss since it is optimized with respect to both networks’ parameters. The most similar work to ours is MirrorGAN (Qiao et al., 2019), which improves the generation of images through re-description. This model however uses a pretrained network for re-description in addition to an adversarial loss. Compared to all aforementioned approaches the DINO framework is the only one in which the adversarial loss alone can achieve sample realism while enforcing correspondence. Finally, since our bidirectional framework uses the generators for discrimination it requires far fewer parameters than these approaches.
4 EXPERIMENTS
We evaluate the DINO framework on image-to-image translation since this is the most typical application for domain-translation systems. Additionally, we tackle the problem of video-driven speech reconstruction, which involves synthesising intelligible speech from silent video. In all of the experiments focus is placed not only on evaluating the quality of the generated samples but also on verifying that the semantics are preserved after translation.
4.1 IMAGE-TO-IMAGE TRANSLATION
The majority of modern domain translation methods have been applied to image-to-image translation problems, since it is common for both domains to share high-level structure and therefore easier to capture their correspondence. We evaluate the DINO framework on the CelebAMask-HQ (Lee et al., 2020) and the Cityscapes (Cordts et al., 2016) datasets, using their recommended training-test splits.
When judging the performance of image-to-image translation systems one must consider multiple factors including the perceptual quality, the semantic consistency and the diversity of the generated images. We therefore rely on a combination of full-reference reconstruction metrics and perceptual metrics for image assessment.
Reconstruction metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) measure the deviation of generated images from the ground truth. Although these metrics are good at measuring image distortion they are usually poor indicators of realism and they penalize diversity. For this reason, we also measure the perceptual quality of the images by using the Fréchet Inception Distance (FID), which compares the statistics of the embeddings of real and fake images in order to measure the quality and diversity. Furthermore, we use the cumulative probability blur detection (CPBD) metric (Narvekar & Karam, 2009) to assess image sharpness. Finally, we use pre-trained semantic segmentation models to verify that image semantics are accurately captured in the images. For the CelebAMask-HQ dataset we use the segmentation model from Lee et al. (2020) and for the Cityscapes dataset we use a DeepLabv3+ model (Chen et al., 2018). We report the pixel accuracy as well as the average intersection over union (mIoU).
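As a concrete reference for the reconstruction metrics, a minimal PSNR implementation is sketched below, assuming image tensors scaled to [0, 1]; the other metrics (SSIM, FID, CPBD) require external implementations.

```python
import torch

def psnr(a: torch.Tensor, b: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio, in dB, between images valued in [0, max_val]."""
    mse = torch.mean((a - b) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```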
We compare our method to other supervised image-to-image translation models, such as Pix2Pix and BicycleGAN. Since DINO is a generic translation method comparing it to translation methods that are tailored to a specific type of translation (Yi et al., 2019) is an unfair comparison since these methods make use of additional information or use task-specific losses. Nevertheless, we present the results for SPADE (Park et al., 2019) on the Cityscapes dataset in order to see how well our approach performs compared to state-of-the-art task-specific translation methods. Since the pretrained SPADE model generates images at a resolution of 512 × 256 we resize images to 256 × 256 for a fair comparison.
When training the DINO model we resize images to 256 × 256 and use networks with a U-Net architecture similar to the Pix2Pix model to ensure a fair comparison. The architecture of the networks used in these experiments can be found in Section A.1.1 of the appendix. Additionally, like Pix2Pix we use an additional L1 loss to train the Forward network (generator), which helps improve image diversity. The balance parameter γ is set to 0.8 for image-to-image translation experiments. We train using the Adam optimizer (Kingma & Ba, 2015), with a learning rate of 0.0002, and momentum parameters β1 = 0.5, β2 = 0.999. The quantitative evaluation on the CelebAMask-HQ and Cityscapes datasets is shown in Tables 1 and 2. Qualitative results are presented in Section A.5.1 of the appendix.
The results in Tables 1 and 2 show that our method outperforms the Pix2Pix and BicycleGAN models both in terms of perceptual quality as well as reconstruction error. More importantly, our approach is better at preserving the image semantics as indicated by the higher pixel accuracy and mIoU. We notice that for the CelebAMask-HQ dataset the segmentation accuracy is better for generated images than for real images. This phenomenon is due to some inconsistent labelling and is explained in Section A.2 of the appendix. We also note that the bidirectional DINO framework can simultaneously train two networks to perform translation in both directions without sacrificing quality and with fewer parameters. Finally, an ablation study for our model is performed in Section A.3 of the appendix.
When comparing our results to those achieved by the SPADE network on the Cityscapes dataset we notice that our model performs similarly, achieving slightly better performance on reconstruction metrics (PSNR, SSIM) and slightly worse performance for preserving the image semantics. This is expected since the SPADE model has been specifically designed for translation from segmentation maps to images. Furthermore, the networks used in these experiments, for the DINO framework, are far simpler (37 million parameters in the generator compared to 97 million). More importantly, unlike SPADE our network can be applied to any task and perform the translation in both directions.
4.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION
Many problems require finding a mapping between signals from different modalities (e.g. speech-driven facial animation, caption-based image generation). This is far more challenging than image-to-image translation since signals from different modalities do not have structural similarities, making it difficult to capture their correspondence. We evaluate the performance of our method on video-driven speech reconstruction, which involves synthesising speech from a silent video. This is a notoriously difficult problem due to ambiguity which is attributed to the existence of homophenous words. Another reason for choosing this problem is that common reconstruction losses (e.g. L1, MSE), which are typically used in image-to-image translation to enforce low-frequency correctness (Isola et al., 2017), are not helpful for the generation of raw waveforms. This means that methods must rely only on the conditional adversarial loss to enforce semantic consistency.
We show that the DINO framework can synthesize intelligible speech from silent video using only the adversarial loss described in Equation 5. Adjusting the framework for this task requires using encoders and decoders that can handle audio and video as shown in Figure 4. The Forward network transforms a sequence of video frames centered around the mouth to its corresponding waveform. The Reverse network is fed a waveform and the initial video frame to produce a video sequence of the speaker. The initial frame is provided to enforce the speaker identity and ensures that the reconstruction error will be based on the facial animation and not on any differences in appearance. This forces the network to focus on capturing the content of the speech and not the speaker’s identity.
Experiments are performed on the GRID dataset (Cooke et al., 2006), which contains short phrases spoken by 33 speakers. There are 1000 phrases per speaker, each containing 6 words from a vocabulary of 51 words. The data is split according to Vougioukas et al. (2019) so that the test set contains unseen speakers and phrases. As baselines for comparison we use a conditional version of WaveGAN (Donahue et al., 2019) and a CycleGAN framework adapted for video-to-audio translation. Additionally, we compare with the model proposed by Vougioukas et al. (2019), which is designed for video-driven speech reconstruction and uses a perceptual loss to accurately capture the spoken content. An Adam optimiser is used with a learning rate of 0.0001 for the video-to-audio network and a learning rate of 0.001 for the audio-to-video network. The balancing parameter γ is set to 0.5.
We evaluate the quality of the synthesized audio based on intelligibility and spoken word accuracy. We measure speech quality using the mean Mel Cepstral Distance (MCD) (Kubichek, 1993), which measures the distance between two signals in the mel-frequency cepstrum and is often used to assess synthesized speech. Furthermore, we use the Short-Time Objective Intelligibility (STOI) (Taal et al., 2011) and Perceptual Evaluation of Speech Quality (PESQ) (Rix et al., 2001) metrics, which measure the intelligibility of the synthesized audio. Finally, in order to verify the semantic consistency of the spoken message we use a pretrained automatic speech recognition (ASR) model and measure the Word Error Rate (WER). The results for the speech-reconstruction task are shown in Table 3.
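For reference, the WER can be computed from the ASR transcripts by a word-level edit distance; a minimal sketch is given below (the ASR model itself is an external component).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[-1][-1] / max(len(r), 1)
```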
The results of Table 3 show that our method is capable of producing intelligible speech and achieving similar performance to the model proposed by Vougioukas et al. (2019). Furthermore, the large WER for both baselines highlights the limitations of cGANs and CycleGANs for cross-modal translation. Although our approach is better at capturing the content and audio-visual correspondence, we notice that samples all share the same robotic voice compared to the other methods. This is expected since discrimination using our approach focuses mostly on audio-visual correspondence and not capturing the speaker identity. Examples of synthesized waveforms and their spectrograms are shown in Section A.6 of the appendix and samples are provided in the supplementary material.
Ethical considerations: We have tested the DINO model on this task as an academic investigation to test its ability to capture common semantics even across modalities. Video-driven speech reconstruction has many practical applications especially in digital communications. It enables videoconferencing in noisy or silent environments and can improve hearing-assistive devices. However, this technology can potentially be used in surveillance systems which raises privacy concerns. Therefore, although we believe that this topic is worth exploring, future researchers should be careful when developing features that will enable this technology to be used for surveillance purposes.
5 CONCLUSIONS
In this paper we have presented a domain translation framework, based on predictive conditioning. Unlike other conditional approaches, predicting the condition forces the discriminator to learn the relationship between domains and ensures that the generated samples preserve cross-domain semantics. The results on image-to-image translation verify that our approach is capable of producing sharp and realistic images while strongly enforcing semantic correspondence between domains. Furthermore, results on video-driven speech reconstruction show that our method is applicable to a wide range of problems and that correspondence can be maintained even when translating across different modalities. Finally, we present a method for bidirectional translation and show that it achieves the same performance while reducing the number of training parameters compared to other models.
A APPENDIX
A.1 NETWORK ARCHITECTURE
A.1.1 IMAGE-TO-IMAGE TRANSLATION
This section describes the network architecture used for the image-to-image translation experiments in Section 4.1. The two networks used in the DINO framework are identical and both use a U-Net encoder-decoder architecture similar to that used in Pix2Pix (Isola et al., 2017). The encoder is a 7-layer Convolutional Neural Network (CNN) made of strided 2D convolutions. The decoder is a 12-layer CNN made of 2D convolutions and up-sampling layers. We use Instance Normalization (Ulyanov et al., 2016), which has been shown to work well in style transfer applications. The network is shown in detail in Figure 5.
A.1.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION
This section describes the architecture of the networks used for video-driven speech reconstruction in the experiments of Section 4.2. In this scenario the Forward network synthesizes speech and the Reverse network performs speech-driven facial animation. The Forward network is made up of a Video Encoder, a single-layer GRU and an Audio Decoder. The video sequence is fed to the Video Encoder, which uses spatio-temporal convolutions to produce an embedding per video frame. The embeddings are fed to a single-layer GRU to create a coherent sequence of representations which is then passed to an Audio Decoder network which will produce 640 audio samples per embedding. Concatenating these chunks of samples without overlap forms a waveform. Both the Video Encoder and Audio Decoder are fully convolutional networks, with the Audio Decoder using an additional self-attention layer (Zhang et al., 2019) before the last layer as shown in Figure 6.
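A skeletal PyTorch version of the Forward network described above could look as follows; the channel widths, embedding size, and pooling are assumptions, and the self-attention layer of the Audio Decoder is omitted for brevity. Only the stated structure (spatio-temporal encoder, single-layer GRU, 640 audio samples per frame embedding) is taken from the text.

```python
import torch
import torch.nn as nn

class ForwardNet(nn.Module):
    """Video-to-speech skeleton: Video Encoder -> GRU -> Audio Decoder."""
    def __init__(self, emb=256):
        super().__init__()
        self.video_enc = nn.Sequential(   # spatio-temporal convolutions
            nn.Conv3d(3, 32, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(32, 64, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),          # one embedding per frame
        )
        self.gru = nn.GRU(64, emb, batch_first=True)
        self.audio_dec = nn.Linear(emb, 640)             # 640 samples per frame

    def forward(self, video):                            # video: (B, 3, T, H, W)
        feats = self.video_enc(video).squeeze(-1).squeeze(-1).transpose(1, 2)
        seq, _ = self.gru(feats)                         # (B, T, emb)
        chunks = self.audio_dec(seq)                     # (B, T, 640)
        return chunks.reshape(video.size(0), -1)         # concatenated waveform
```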
The Reverse network is made up of two encoders responsible for capturing the speaker identity and content. The content stream uses a sliding window approach to create a sequence of embeddings for the audio using an Audio Encoder and a 2-layer GRU. The identity stream consists of an Identity Encoder which captures the identity of the person and enforces it on the generated video. The two embeddings are concatenated and fed to a Frame Decoder which produces a video sequence. Skip connections between the Identity Encoder and Frame Decoder ensure that the face is accurately reconstructed. A detailed illustration of the Reverse network is shown in Figure 7.
A.2 CELEBA SEGMENTATION
In Table 1 we notice that the segmentation evaluation on generated images surpasses that of real images. This is due to some inconsistencies in the labelled images. Examples in Figure 8
show that in these cases some objects are labeled despite being occluded in the real image. However, these objects will appear in the generated images since the labelled images are used to drive their generation. These small inconsistencies in the data annotations explain why segmentation is slightly better for synthesized samples.
A.3 ABLATION STUDY
In order to measure the effect of the reconstruction loss and adaptive balancing used in the DINO framework we perform an ablation study on the CelebAMask-HQ dataset. The results of the study are shown in Table 4. As expected the addition of the L1 loss results in a higher PSNR and SSIM since these metrics depend on the reconstruction error, which is directly optimised by this loss. More importantly, we note that the addition of the L1 loss improves the FID score since it prevents mode collapse. This is evident when observing the examples shown in Figure 9, which shows the mode-dropping that occurs in both DINO and Pix2Pix when this loss is omitted. Finally, we notice that the adaptive balancing used in DINO allows for more stable training and improves performance, which is reflected across all metrics.
A.4 ADAPTIVE BALANCING
As mentioned in Section 3, DINO uses a controller to ensure that the energy of generated samples is always a fixed multiple of the energy of real samples. Although this approach is similar to that used by BEGAN (Berthelot et al., 2017) there is a key difference. BEGANs perform autoencoding and therefore assume that the discriminator’s reconstruction error will be larger for real samples since they have more details which are harder to reconstruct. For this reason, the controller used by BEGAN tries to maintain a balance throughout training where L(x_real) > L(x_fake). In the DINO framework the discriminator performs domain translation, so it is natural to assume that real samples should produce better reconstructions since they contain useful information regarding the semantics. For this reason we choose to maintain a balance where L(x_fake) > L(x_real). This is reflected in the controller update as well as the balance parameter of DINO, which is the inverse of that used in BEGANs.
As we mentioned the core difference with the adaptive balancing in BEGAN is that DINO maintains a balance where L(xfake) > L(xreal) whereas BEGAN maintains a balance for which L(xreal) > L(xfake). This makes BEGAN unsuitable for use with the predictive conditioning proposed in this paper since it allows the generator to “hide” information about the condition in synthesized samples. The generator thus tricks the discriminator into producing a much better reconstruction of the condition for fake samples without the need for them to be realistic. Since the controller prevents the discriminator from pushing fake samples to a higher energy than real samples (i.e. the controller output is zero when fake samples have higher energy) this behaviour is not prohibited by BEGANs throughout training.
The method used in DINO however does not have this problem since it encourages the discriminator to assign higher energies to unrealistic samples thus penalizing them and preventing the generator from “cheating” in the same way as BEGAN. To show this effect we train a conditional BEGAN
and the DINO framework to perform translation from photo to sketch using the APDrawings dataset from (Yi et al., 2019). Figure 10 shows how the balancing used in DINO allows the network to penalize unrealistic images by encouraging the discriminator to assign to them energies larger than the real samples. We note that this problem occurs only in cases where the source domain is more informative than the target domain (i.e. photo → sketch). This does not occur in cases where the source domain is more generic than the target domain (i.e. segmentation map → photo).
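The sign difference between the two balancing schemes can be summarized in a short sketch, with energies treated as plain floats and the gains illustrative:

```python
def began_update(k, lam, e_real, e_fake, gamma):
    # BEGAN balances toward gamma * e_real = e_fake, keeping real energy higher
    return min(max(k + lam * (gamma * e_real - e_fake), 0.0), 1.0)

def dino_update(k, lam, e_real, e_fake, gamma):
    # DINO balances toward e_real = gamma * e_fake, keeping fake energy higher
    return min(max(k + lam * (e_real - gamma * e_fake), 0.0), 1.0)
```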
A.5 QUALITATIVE RESULTS
A.5.1 IMAGE-TO-IMAGE TRANSLATION
CelebAMask-HQ
Examples of image-to-image translation from segmentation maps to photos for the CelebMask-HQ dataset are shown in Figure 11. We note that our approach is able to maintain semantics and produce realistic results even in cases with extreme head poses and facial expressions.
Cityscapes
Examples of image-to-image translation from segmentation maps to photos for the Cityscapes dataset are shown in Figure 12.
A.6 VIDEO-TO-SPEECH TRANSLATION
This section presents examples of waveforms produced by the methods compared in Table 3. In addition to the waveforms we also present their corresponding spectrograms. The waveforms and spectrograms are shown in Figure 13. It is evident from the shape of the waveform that our method more accurately captures voiced sections in the audio. Furthermore, the spectrogram of our method closely resembles that of the ground truth although some high frequency components are not captured. The performance is similar to the Perceptual GAN proposed by Vougioukas et al. (2019) although our method relies on only an adversarial loss. | 1. What is the focus of the paper, and what are the proposed approaches?
2. What are the strengths of the proposed method, particularly in its objectives and experiments?
3. What are the weaknesses of the paper regarding the omission of important details and inconsistent results?
4. Do you have any questions regarding the adaptive balancing mechanism? | Review | Review
The paper proposes an adversarial framework DINO to train translation models from source to target and target to source. The basic idea is to replace the generator and discriminator in the energy-based GAN with two domain translation models. The discriminator (reverse generator) and the generator compete in a minimax game to reconstruct the data. The framework is further extended with duplicate output heads for both discriminator and generator to enhance the training robustness. The authors evaluated their framework on two tasks: image-to-image translation and silent-video-to-speech reconstruction. The DINO method shows impressive improvement in both tasks.
Strong points:
The proposed DINO framework is well motivated. The objectives in DINO are reasonable and novel.
Experiments on both image-to-image translation and video-to-speech reconstruction verify that the DINO method achieves significant improvements compared with other translation methods.
Weak points:
Important details are omitted for the image-to-image translation and video-to-speech reconstruction experiments. The backbone networks and the parameter setup are unclear, making it impossible to reproduce the method.
DINO and DINO (bidirectional) are not consistent winners. It is not explained or analyzed why DINO sometimes wins while DINO (bidirectional) wins at other times. There is no recommendation for practical use either.
The adaptive balancing seems reasonable, but whether it actually improves training is not studied in the experiments.
ICLR | Title
DINO: A Conditional Energy-Based GAN for Domain Translation
Abstract
Domain translation is the process of transforming data from one domain to another while preserving the common semantics. Some of the most popular domain translation systems are based on conditional generative adversarial networks, which use source domain data to drive the generator and as an input to the discriminator. However, this approach does not enforce the preservation of shared semantics since the conditional input can often be ignored by the discriminator. We propose an alternative method for conditioning and present a new framework, where two networks are simultaneously trained, in a supervised manner, to perform domain translation in opposite directions. Our method is not only better at capturing the shared information between two domains but is more generic and can be applied to a broader range of problems. The proposed framework performs well even in challenging cross-modal translations, such as video-driven speech reconstruction, for which other systems struggle to maintain correspondence.
1 INTRODUCTION
Domain translation methods exploit the information redundancy often found in data from different domains in order to find a mapping between them. Successful applications of domain translation include image style transfer (Zhu et al., 2017a) and speech-enhancement (Pascual et al., 2017). Furthermore, these systems are increasingly being used to translate across modalities in applications such as speech-driven animation (Chung et al., 2017) and caption-based image generation (Reed et al., 2016). Some of the most popular methods for domain translation are based on conditional Generative Adversarial Networks (cGANs) (Mirza & Osindero, 2014). The conditional information in cGANs is used to drive the generation and to enforce the correspondence between condition and sample. Various alternatives have been proposed for how the condition should be included in the discriminator (Miyato & Koyama, 2018; Reed et al., 2016) but the majority of frameworks provide it as an input, hoping that the sample’s correlation with the condition will play a role in distinguishing between synthesized and genuine samples. The main drawback of this approach is that it does not encourage the use of the conditional information and therefore its contribution can be diminished or even ignored. This may lead to samples that are not semantically consistent with the condition.
In this paper, we propose the Dual Inverse Network Optimisation (DINO) framework1 which is based on energy-based GANs (Zhao et al., 2017) and consists of two networks that perform translation in opposite directions as shown in Figure 1. In this framework, one network (Forward network) translates data from the source domain to the target domain while the other (Reverse Network) performs the inverse translation. The Reverse network’s goal is to minimize the reconstruction error for genuine data and to maximize it for generated data. The Forward network aims to produce samples that can be accurately reconstructed back to the source domain by the Reverse Network. Therefore, during training the Forward network is trained as a generator and the Reverse as a discriminator. Since discrimination is based on the ability to recover source domain samples, the Forward network is driven to produce samples that are not only realistic but also preserve the shared semantics. We show that this approach is effective across a broad range of supervised translation problems, capturing the correspondence even when domains are from different modalities (i.e., video-audio). In detail, the contributions of this paper are:
1Source code: https://github.com/DinoMan/DINO
• A domain translation framework, based on a novel conditioning mechanism for energybased GANs, where the adversarial loss is based on the prediction of the condition. • An adaptive method for balancing the Forward and Reverse networks, which makes training
more robust and improves performance. • A method for simultaneously training two networks to perform translation in inverse direc-
tions, which requires fewer parameters than other domain translation methods. • The first end-to-end trainable model for video-driven speech reconstruction capable of pro-
ducing intelligible speech without requiring task-specific losses to enforce correct content.
2 RELATED WORK
Domain translation covers a wide range of problems including image-to-image translation (Isola et al., 2017), caption-based image synthesis (Qiao et al., 2019), and text-to-speech synthesis (Arik et al., 2017). Unsupervised translation methods attempt to find a relationship between domains using unpaired training data. However, finding correspondence without supervision is an ill-posed problem which is why these methods often impose additional constraints on their networks or objectives. The majority of unsupervised methods are applied to image-to-image translation problems. The CoGAN model (Liu & Tuzel, 2016) imposes a weight-sharing constraint on specific layers of two GANs, which are trained to produce samples from different domains. The motivation is that sharing weights in layers associated with high-level features should help preserve the overall structure of the images. This approach is extended in the UNIT framework (Liu et al., 2017), where the generative networks are Variational Autoencoders (VAEs) with a shared latent space. The weight-sharing used in the CoGAN and UNIT frameworks restricts them to problems where both domains are of the same modality. A more generic method of achieving domain-correspondence is presented in the CycleGAN model proposed by Zhu et al. (2017a). The CycleGAN objective includes a cycle-consistency loss to ensure that image translation between two domains is invertible. Recently, Chen et al. (2020) showed that reusing part of the discriminators in CycleGAN as encoders for the generators achieves parameter reduction as well as better results. Although it is possible to apply the cycle consistency loss for cross-modal translation it has not been widely used in such scenarios.
Unlike unsupervised methods, supervised approaches rely on having a one-to-one correspondence between the data from different domains. The Pix2Pix model (Isola et al., 2017) uses cGANs to perform image-to-image translation and has inspired many subsequent works (Zhu et al., 2017a; Wang et al., 2018; Park et al., 2019). Compared to unsupervised methods, supervised approaches have had more success in translating across different modalities. Notable applications include speechdriven facial animation (Vougioukas et al., 2020) and text-to-image synthesis (Reed et al., 2016; Qiao et al., 2019). It is important to note that the adversarial loss in cGANs alone is often not capable of establishing domain correspondence, which is why these approaches also rely on additional reconstruction or perceptual losses (Johnson et al., 2016) in order to accurately capture semantics.
In many scenarios, the relationship between domains is not bijective (e.g. one-to-many mapping) hence it is desirable for translation systems to produce a diverse set of outputs for a given input. Achieving this diversity is a common issue with GAN-based translation systems (Isola et al., 2017; Liu et al., 2017) since they often suffer from mode collapse. The Pix2Pix model (Isola et al., 2017) proposes using dropout in both training and inference stages as a solution to this problem. Another successful approach is to apply the diversity regularisation presented in Yang et al. (2019). Furthermore, many works (Zhu et al., 2017b; Huang et al., 2018; Chang et al., 2018) attempt to solve this issue by enforcing a bijective mapping between the latent space and the target image domain. Finally, adding a reconstruction loss to the objective also discourages mode collapse (Rosca et al., 2017), by requiring that the entire support of the distribution of training images is covered.
2.1 CONDITIONAL GANS
The most common method for conditioning GANs is proposed by Mirza & Osindero (2014) and feeds the conditional information as input to both the generator and the discriminator. Using the condition in the discriminator assumes that the correlation of samples with the condition will be considered when distinguishing between real and fake samples. However, feeding the condition to the discriminator does not guarantee that the correspondence will be captured and could even lead to the condition being ignored by the network. This issue is shared across all methods which use the condition as input to the discriminator (Miyato & Koyama, 2018; Reed et al., 2016). Furthermore, it explains why these models perform well when there is structural similarity between domains (e.g. image-to-image translation) but struggle to maintain semantics in cases where domains are significantly different such as cross-modal applications (e.g. video-to-speech).
Another method presented in Park et al. (2019) proposes generator conditioning through spatiallyadaptive normalisation layers (SPADE). This approach has been used to produce state of the art results in image generation. It should be noted that this approach requires that source domain data be one-hot encoded semantic segmentation maps and is therefore limited to specific image-translation problems (i.e. segmentation maps to texture image translations). More importantly, conditioning of the discriminator is still done by feeding the condition as an input and hence will have similar drawbacks as other cGAN based methods with regards to semantic preservation.
In some cases it is possible to guide the discriminator to learn specific semantics by performing a self-supervised task. An example of this is the discriminator proposed in Vougioukas et al. (2020) which enforces audio-visual synchrony in facial animation by detecting in and out of sync pairs of video and audio. However, this adversarial loss alone can not fully enforce audio-visual synchronization which is why additional reconstruction losses are required. Finally, it is important to note that finding a self-supervised task capable of enforcing the desired semantics is not always possible.
2.2 ENERGY-BASED GANS
Energy-based GANs (Mathieu et al., 2015; Berthelot et al., 2017) use a discriminator D which is an autoencoder. The generator G synthesizes a sample G(z) from a noise sample z ∈ Z . The discriminator output is fed to a loss function L in order to form an energy function LD(·) = L (D(·)). The objective of the discriminator is to minimize the energy assigned to real data x ∈ X and maximize the energy of generated data. The generator has the opposite objective, leading to the following minimax game:
min D max G V (D,G) = LD(x)− LD(G(z)) (1)
The EBGAN model proposed by Mathieu et al. (2015) uses the mean square error (MSE) to measure the reconstruction and a margin loss to limit the penalization for generated samples. The resulting objective thus becomes:
min D max G V (D,G) = ‖D(x)− x‖+max(0,m− ‖D(G(z))−G(z)‖), (2)
The marginm corresponds to the maximum energy that should be assigned to a synthesized sample. Performance depends on the magnitude of the margin, with large values causing instability and small values resulting in mode collapse. For this reason, some approaches (Wang et al., 2017; Mathieu et al., 2015) recommend decaying the margin during training. An alternative approach is proposed by Berthelot et al. (2017) which introduces an equilibrium concept to balance the generator
and discriminator and measure training convergence. Energy-based GANs have been successful in generating high quality images although their use for conditional generation is limited.
3 METHOD
The encoder-decoder structure used in the discriminator of an energy-based GAN gives it the flexibility to perform various regression tasks. The choice of task determines how energy is distributed and can help the network focus on specific characteristics. We propose a conditional version of EBGAN where the generator (Forward network) and discriminator (Reverse network) perform translations in opposite directions. The Reverse network is trained to minimize the reconstruction error for real samples (low energy) and maximize the error for generated samples (high energy). The Forward network aims to produce samples that will be assigned a low energy by the Reverse network. Generated samples that do not preserve the semantics can not be accurately reconstructed back to the source domain and are thus penalized. Given a condition x ∈ X and its corresponding target y ∈ Y and networks F : X → Y and R : Y → X the objective of the DINO framework becomes:
min R max F V (R,F ) = L (R(y), x)−L (R(F (x)), x), (3)
where L (·, ·) is a loss measuring the reconstruction error between two samples. Multiple choices exist for the loss function and their effects are explained in Lecun et al. (2006). We propose using the MSE to measure reconstruction error and a margin loss similar to that used in EBGAN. However, as shown in Mathieu et al. (2015) this method is sensitive to the value of margin parameter m, which must be gradually decayed to avoid instability. We propose using an adaptive method inspired by BEGAN (Berthelot et al., 2017) which is based on maintaining a fixed ratio γ ∈ [0, 1) between the reconstruction of positive and negative samples.
γ = L (R(y), x)
L (R(F (x)), x) (4)
Balancing is achieved using a proportional controller with gain λ. A typical value for the gain is λ = 0.001. The output of the controller kt ∈ [0, 1] determines the amount of emphasis that the Reverse network places on the reconstruction error of generated samples. The balance determines an upper bound for the energy of fake samples, which is a fixed multiple of the energy assigned to real samples. When the generator is producing samples with a low energy they are pushed to this limit faster than when the generator is already producing high-energy samples. Since the ratio of reconstruction errors is kept fixed this limit will decay as the reconstruction error for real samples improves over time. This achieves a similar result to a decaying margin loss without the necessity for a decay schedule. The output of the controller as well as the reconstruction error for real and fake samples during training is shown in Figure 2. We notice that the controller output increases at the start of training in order to push generated samples to a higher energy value and reduces once the limit determined by γ is reached. Although this approach is inspired by BEGAN there are some key differences which prevent the BEGAN from working with the predictive conditioning proposed in this paper. These are discussed in detail in Section A.4 of the appendix.
In practice we find it advantageous to use the margin loss in combination with adaptive balancing. In this case the margin parameter serves as a hard cutoff for the energy of generated samples and
helps stabilize the system at the beginning of training. As training progresses and reconstruction of real samples improves training relies more on the soft limit enforced by the energy balancing mechanism. In this case we can set γ = 0 to fall back to a fixed margin approach. The training objective is shown in Equation 5. When dealing with one-to-many scenarios we find that adding a reconstruction loss to the generator’s objective can help improve sample diversity.
LR = ‖R(y)− x‖+ kt ·max(0,m− ‖R(F (x))− x‖) LF = ‖R(F (x))− x‖ kt+1 = kt + λ · [‖R(y)− x‖ − γ · ‖R(F (x))− x‖]
(5)
3.1 BIDIRECTIONAL TRANSLATION
It is evident from Equation 5 that the two networks have different goals and that only the Forward network will produce realistic translations since the Reverse network is trained only using an MSE loss. This prohibits its use for domain translation and limits it to facilitating the training of the Forward network. For the DINO framework, since the Forward and Reverse network have the same structure we can swap the roles of the networks and retrain to obtain realistic translation in the opposite direction. However, it is also possible to train both networks simultaneously by combining the objectives for both roles (i.e. discriminator and generator). This results in the following zero-sum two player game:
min R max F V (R,F ) = L (R(y), x)−L (R(F (x)), x) + L (F (R(y)), y)−L (F (x), y) (6)
In this game both players have the same goal which is to minimize the reconstruction error for real samples and to maximize it for fake samples while also ensuring that their samples are assigned a low energy by the other player. Each player therefore behaves both as a generator and as a discriminator. However, in practice we find that is difficult for a network to achieve the objectives for both roles, causing instability during training. The work proposed by Chen et al. (2020), where discriminators and generators share encoders, comes to a similar conclusion and proposes decoupling the training for different parts of the networks. This is not possible in our framework since the discriminator for one task is the generator for the other. To solve this problem we propose branching the decoders of the networks to create two heads which are exclusively used for either discrimination or generation. We find empirically that the best performance in our image-to-image experiments is achieved when branching right after the third layer of the decoder. Additionally, the network encoders are frozen during the generation stage. The bidirectional training paradigm is illustrated in Figure 3.
When training network R as a discriminator we use the stream that passes through the discriminative head Rdisc, and when training it as a generator we use the stream that passes through the generative head Rgen. The same applies for player F, which uses streams Fdisc and Fgen for discrimination and generation, respectively. To maintain balance during training we use a different controller for each player, which results in the objective shown in Equation 7. The first two terms in each player's objective represent the player's goal as a discriminator, and the last term reflects its goal as a generator.

$$
\begin{aligned}
\mathcal{L}_R &= \underbrace{\mathcal{L}(R_{disc}(y), x) - k_t \cdot \mathcal{L}(R_{disc}(F_{gen}(x)), x)}_{\text{discriminator objective}} + \underbrace{\mathcal{L}(F_{disc}(R_{gen}(y)), y)}_{\text{generator objective}} \\
\mathcal{L}_F &= \underbrace{\mathcal{L}(F_{disc}(x), y) - \mu_t \cdot \mathcal{L}(F_{disc}(R_{gen}(y)), y)}_{\text{discriminator objective}} + \underbrace{\mathcal{L}(R_{disc}(F_{gen}(x)), x)}_{\text{generator objective}} \\
k_{t+1} &= k_t + \lambda_R \cdot \big[ \mathcal{L}(R_{disc}(y), x) - \gamma_D \cdot \mathcal{L}(R_{disc}(F_{gen}(x)), x) \big] \\
\mu_{t+1} &= \mu_t + \lambda_F \cdot \big[ \mathcal{L}(F_{disc}(x), y) - \gamma_G \cdot \mathcal{L}(F_{disc}(R_{gen}(y)), y) \big]
\end{aligned}
\tag{7}
$$
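Under the same assumptions as the sketches above, one bidirectional training step implementing Equation 7 could look as follows; both players are BranchedPlayer-style modules whose forward pass takes a role argument.

```python
import torch.nn.functional as nnf

def bidirectional_step(R, F, opt_R, opt_F, x, y, k, mu,
                       lam_R=0.001, lam_F=0.001, gamma_D=0.8, gamma_G=0.8):
    # Player R: discriminate Y -> X reconstructions, generate towards player F.
    e_real_R = nnf.mse_loss(R(y, role="disc"), x)
    e_fake_R = nnf.mse_loss(R(F(x, role="gen").detach(), role="disc"), x)
    gen_R = nnf.mse_loss(F(R(y, role="gen"), role="disc"), y)
    loss_R = e_real_R - k * e_fake_R + gen_R
    opt_R.zero_grad(); loss_R.backward(); opt_R.step()

    # Player F: the symmetric objective.
    e_real_F = nnf.mse_loss(F(x, role="disc"), y)
    e_fake_F = nnf.mse_loss(F(R(y, role="gen").detach(), role="disc"), y)
    gen_F = nnf.mse_loss(R(F(x, role="gen"), role="disc"), x)
    loss_F = e_real_F - mu * e_fake_F + gen_F
    opt_F.zero_grad(); loss_F.backward(); opt_F.step()

    # One proportional controller per player (Equation 7).
    k = min(max(k + lam_R * (e_real_R.item() - gamma_D * e_fake_R.item()), 0.0), 1.0)
    mu = min(max(mu + lam_F * (e_real_F.item() - gamma_G * e_fake_F.item()), 0.0), 1.0)
    return k, mu
```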
3.2 COMPARISON WITH OTHER METHODS
As mentioned in Section 2.1, the cGAN conditioning mechanism used in most supervised translation systems struggles to preserve the shared semantics in cases where there is no structural similarity between domains. The DINO framework attempts to overcome this limitation by using a different paradigm, where the condition is predicted by the discriminator instead of being fed as an additional input, forcing the generator to maintain the common semantics. Our approach is inspired by several semi-supervised training techniques for GANs (Salimans et al., 2016; Odena et al., 2017; Springenberg, 2015), which have shown that specializing the discriminator by performing a classification task adds structure to its latent space and improves the quality of generated samples. However, these approaches are not designed for conditional generation and use classification only as a proxy task. This differs from our approach, where discrimination is driven directly by the prediction of the condition.
Another advantage of our system stems from its use of an encoder-decoder structure for the Reverse network. This provides flexibility, since the Reverse network can be easily adapted to perform a variety of different translation tasks. In contrast, the multi-stream discriminators used in cross-modal cGANs require fusing representations from different streams. The fusion method, as well as the stage at which embeddings are fused, is an important design decision that must be carefully chosen depending on the task, since it can greatly affect the performance of these models.
The objective of the generator in Equation 5 resembles the cycle-consistency loss used in many unsupervised methods such as CycleGAN (Zhu et al., 2017a) and NICE-GAN (Chen et al., 2020). It also bears resemblance to the back-translation used in bidirectional neural machine translation methods (Artetxe et al., 2018; Lample et al., 2018). However, it is important to note that the cycle-consistency loss used in these approaches is not an adversarial loss, since it is optimized with respect to both networks' parameters. The most similar work to ours is MirrorGAN (Qiao et al., 2019), which improves the generation of images through re-description. This model, however, uses a pretrained network for re-description in addition to an adversarial loss. Compared to all the aforementioned approaches, the DINO framework is the only one in which the adversarial loss alone can both achieve sample realism and enforce correspondence. Finally, since our bidirectional framework uses the generators for discrimination, it requires far fewer parameters than these approaches.
4 EXPERIMENTS
We evaluate the DINO framework on image-to-image translation, since this is the most typical application for domain-translation systems. Additionally, we tackle the problem of video-driven speech reconstruction, which involves synthesising intelligible speech from silent video. In all of the experiments, focus is placed not only on evaluating the quality of the generated samples but also on verifying that the semantics are preserved after translation.
4.1 IMAGE-TO-IMAGE TRANSLATION
The majority of modern domain translation methods have been applied to image-to-image translation problems, since it is common for both domains to share high-level structure, which makes it easier to capture their correspondence. We evaluate the DINO framework on the CelebAMask-HQ (Lee et al., 2020) and the Cityscapes (Cordts et al., 2016) datasets, using their recommended training-test splits.
When judging the performance of image-to-image translation systems one must consider multiple factors including the perceptual quality, the semantic consistency and the diversity of the generated images. We therefore rely on a combination of full-reference reconstruction metrics and perceptual metrics for image assessment.
Reconstruction metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) measure the deviation of generated images from the ground truth. Although these metrics are good at measuring image distortion, they are usually poor indicators of realism and they penalize diversity. For this reason, we also measure the perceptual quality of the images using the Fréchet Inception Distance (FID), which compares the statistics of the embeddings of real and fake images in order to measure quality and diversity. Furthermore, we use the cumulative probability blur detection (CPBD) metric (Narvekar & Karam, 2009) to assess image sharpness. Finally, we use pre-trained semantic segmentation models to verify that image semantics are accurately captured. For the CelebAMask-HQ dataset we use the segmentation model from Lee et al. (2020) and for the Cityscapes dataset we use a DeepLabv3+ model (Chen et al., 2018). We report the pixel accuracy as well as the average intersection over union (mIoU).
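For reference, pixel accuracy and mIoU can be computed from integer label maps as follows; this is a standard formulation and may differ in detail from the evaluation code of the segmentation models cited above.

```python
import numpy as np

def pixel_acc_and_miou(pred, target, num_classes):
    """pred, target: integer label maps of shape (H, W)."""
    pred, target = pred.ravel(), target.ravel()
    acc = float((pred == target).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:              # skip classes absent from both maps
            ious.append(inter / union)
    return acc, float(np.mean(ious))
```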
We compare our method to other supervised image-to-image translation models, such as Pix2Pix and BicycleGAN. Since DINO is a generic translation method, comparing it to methods tailored to a specific type of translation (Yi et al., 2019) would be unfair, as these methods make use of additional information or task-specific losses. Nevertheless, we present the results for SPADE (Park et al., 2019) on the Cityscapes dataset in order to see how well our approach performs compared to state-of-the-art task-specific translation methods. Since the pretrained SPADE model generates images at a resolution of 512 × 256, we resize images to 256 × 256 for a fair comparison.
When training the DINO model we resize images to 256 × 256 and use networks with a U-Net architecture similar to the Pix2Pix model to ensure a fair comparison. The architecture of the networks used in these experiments can be found in Section A.1.1 of the appendix. Additionally, like Pix2Pix, we use an additional L1 loss to train the Forward network (generator), which helps improve image diversity. The balance parameter γ is set to 0.8 for the image-to-image translation experiments. We train using the Adam optimizer (Kingma & Ba, 2015), with a learning rate of 0.0002 and momentum parameters β1 = 0.5, β2 = 0.999. The quantitative evaluation on the CelebAMask-HQ and Cityscapes datasets is shown in Tables 1 and 2. Qualitative results are presented in Section A.5.1 of the appendix.
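For concreteness, the reported optimisation settings translate into the following setup. The convolutional modules are placeholders for the U-Net players, and the weight of the auxiliary L1 term is an assumption (the text does not specify it), chosen here by analogy with Pix2Pix.

```python
import torch
import torch.nn as nn

F_net = nn.Conv2d(3, 3, 3, padding=1)  # placeholder for the Forward U-Net
R_net = nn.Conv2d(3, 3, 3, padding=1)  # placeholder for the Reverse U-Net

gamma, gain = 0.8, 0.001               # balance ratio and controller gain
lambda_l1 = 100.0                      # L1 weight: an assumption, as in Pix2Pix

opt_F = torch.optim.Adam(F_net.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_R = torch.optim.Adam(R_net.parameters(), lr=2e-4, betas=(0.5, 0.999))
```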
The results in Tables 1 and 2 show that our method outperforms the Pix2Pix and BicycleGAN models both in terms of perceptual quality and reconstruction error. More importantly, our approach is better at preserving the image semantics, as indicated by the higher pixel accuracy and mIoU. We notice that for the CelebAMask-HQ dataset the segmentation accuracy is better for generated images than for real images. This phenomenon is due to some inconsistent labelling and is explained in Section A.2 of the appendix. We also note that the bidirectional DINO framework can simultaneously train two networks to perform translation in both directions without sacrificing quality and with fewer parameters. Finally, an ablation study for our model is performed in Section A.3 of the appendix.
When comparing our results to those achieved by the SPADE network on the Cityscapes dataset, we notice that our model performs similarly, achieving slightly better performance on reconstruction metrics (PSNR, SSIM) and slightly worse performance at preserving the image semantics. This is expected, since the SPADE model has been specifically designed for translation from segmentation maps to images. Furthermore, the networks used for the DINO framework in these experiments are far simpler (37 million parameters in the generator compared to 97 million). More importantly, unlike SPADE, our network can be applied to any task and can perform the translation in both directions.
4.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION
Many problems require finding a mapping between signals from different modalities (e.g. speech-driven facial animation, caption-based image generation). This is far more challenging than image-to-image translation, since signals from different modalities do not have structural similarities, making it difficult to capture their correspondence. We evaluate the performance of our method on video-driven speech reconstruction, which involves synthesising speech from a silent video. This is a notoriously difficult problem due to ambiguity, which is attributed to the existence of homophenous words. Another reason for choosing this problem is that common reconstruction losses (e.g. L1, MSE), which are typically used in image-to-image translation to enforce low-frequency correctness (Isola et al., 2017), are not helpful for the generation of raw waveforms. This means that methods must rely only on the conditional adversarial loss to enforce semantic consistency.
We show that the DINO framework can synthesize intelligible speech from silent video using only the adversarial loss described in Equation 5. Adjusting the framework for this task requires using encoders and decoders that can handle audio and video, as shown in Figure 4. The Forward network transforms a sequence of video frames centered around the mouth into its corresponding waveform. The Reverse network is fed a waveform and the initial video frame to produce a video sequence of the speaker. The initial frame is provided to enforce the speaker identity and to ensure that the reconstruction error will be based on the facial animation and not on any differences in appearance. This forces the network to focus on capturing the content of the speech and not the speaker's identity.
Experiments are performed on the GRID dataset (Cooke et al., 2006), which contains short phrases spoken by 33 speakers. There are 1000 phrases per speaker, each containing 6 words from a vocabulary of 51 words. The data is split according to Vougioukas et al. (2019) so that the test set contains unseen speakers and phrases. As baselines for comparison we use a conditional version of WaveGAN (Donahue et al., 2019) and a CycleGAN framework adapted for video-to-audio translation. Additionally, we compare with the model proposed by Vougioukas et al. (2019), which is designed for video-driven speech reconstruction and uses a perceptual loss to accurately capture the spoken content. An Adam optimizer is used with a learning rate of 0.0001 for the video-to-audio network and a learning rate of 0.001 for the audio-to-video network. The balancing parameter γ is set to 0.5.
We evaluate the quality of the synthesized audio based on intelligibility and spoken word accuracy. We measure speech quality using the mean Mel Cepstral Distance (MCD) (Kubichek, 1993), which measures the distance between two signals in the mel-frequency cepstrum and is often used to assess synthesized speech. Furthermore, we use the Short-Time Objective Intelligibility (STOI) (Taal et al., 2011) and Perceptual Evaluation of Speech Quality (PESQ) (Rix et al., 2001) metrics, which measure the intelligibility of the synthesized audio. Finally, in order to verify the semantic consistency of the spoken message we use a pretrained automatic speech recognition (ASR) model and measure the Word Error Rate (WER). The results for the speech-reconstruction task are shown in Table 3.
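A sketch of this evaluation is shown below, assuming the third-party pystoi, pesq, jiwer and librosa packages. The MCD variant here is a simplified frame-aligned one; proper MCD evaluation typically aligns the sequences, e.g. with DTW.

```python
import numpy as np
import librosa                 # MFCC extraction for the simplified MCD
from pystoi import stoi        # short-time objective intelligibility
from pesq import pesq          # perceptual evaluation of speech quality
from jiwer import wer          # word error rate

def evaluate_speech(ref_wav, gen_wav, fs, ref_text, asr_text):
    # Simplified, frame-aligned Mel Cepstral Distance (excludes the 0th coeff).
    c_ref = librosa.feature.mfcc(y=ref_wav, sr=fs, n_mfcc=13)[1:]
    c_gen = librosa.feature.mfcc(y=gen_wav, sr=fs, n_mfcc=13)[1:]
    n = min(c_ref.shape[1], c_gen.shape[1])
    mcd = (10.0 / np.log(10)) * np.mean(
        np.sqrt(2.0 * np.sum((c_ref[:, :n] - c_gen[:, :n]) ** 2, axis=0)))
    return {
        "MCD": mcd,
        "STOI": stoi(ref_wav, gen_wav, fs),
        "PESQ": pesq(fs, ref_wav, gen_wav, "wb"),  # "wb" expects 16 kHz audio
        "WER": wer(ref_text, asr_text),            # asr_text from a pretrained ASR
    }
```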
The results in Table 3 show that our method is capable of producing intelligible speech, achieving similar performance to the model proposed by Vougioukas et al. (2019). Furthermore, the large WER for both baselines highlights the limitations of cGANs and CycleGANs for cross-modal translation. Although our approach is better at capturing the content and audio-visual correspondence, we notice that all samples share the same robotic voice, in contrast to the other methods. This is expected, since discrimination in our approach focuses mostly on audio-visual correspondence and not on capturing the speaker identity. Examples of synthesized waveforms and their spectrograms are shown in Section A.6 of the appendix, and samples are provided in the supplementary material.
Ethical considerations: We have tested the DINO model on this task as an academic investigation to test its ability to capture common semantics even across modalities. Video-driven speech reconstruction has many practical applications especially in digital communications. It enables videoconferencing in noisy or silent environments and can improve hearing-assistive devices. However, this technology can potentially be used in surveillance systems which raises privacy concerns. Therefore, although we believe that this topic is worth exploring, future researchers should be careful when developing features that will enable this technology to be used for surveillance purposes.
5 CONCLUSIONS
In this paper we have presented a domain translation framework based on predictive conditioning. Unlike other conditional approaches, predicting the condition forces the discriminator to learn the relationship between domains and ensures that the generated samples preserve cross-domain semantics. The results on image-to-image translation verify that our approach is capable of producing sharp and realistic images while strongly enforcing semantic correspondence between domains. Furthermore, the results on video-driven speech reconstruction show that our method is applicable to a wide range of problems and that correspondence can be maintained even when translating across different modalities. Finally, we present a method for bidirectional translation and show that it achieves the same performance while reducing the number of training parameters compared to other models.
A APPENDIX
A.1 NETWORK ARCHITECTURE
A.1.1 IMAGE-TO-IMAGE TRANSLATION
This section describes the network architecture used for the image-to-image translation experiments in Section 4.1. The two networks used in the DINO framework are identical and both use a U-Net encoder-decoder architecture similar to that used in Pix2Pix (Isola et al., 2017). The encoder is a 7-layer Convolutional Neural Network (CNN) made of strided 2D convolutions. The decoder is a 12-layer CNN made of 2D convolutions and up-sampling layers. We use Instance Normalization (Ulyanov et al., 2016), which has been shown to work well in style transfer applications. The network is shown in detail in Figure 5.
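The building blocks described above could be sketched as follows; the kernel sizes, activations and up-sampling mode are assumptions, since only the layer types and normalization are specified.

```python
import torch.nn as nn

def enc_block(c_in, c_out):
    # Strided 2D convolution with Instance Normalization (encoder layer).
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True),
    )

def dec_block(c_in, c_out):
    # Up-sampling followed by a 2D convolution (decoder layer).
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True),
    )
```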
A.1.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION
This section describes the architecture of the networks used for video-driven speech reconstruction in the experiments of Section 4.2. In this scenario the Forward network synthesizes speech and the Reverse network performs speech-driven facial animation. The Forward network is made up of a Video Encoder, a single-layer GRU and an Audio Decoder. The video sequence is fed to the Video Encoder, which uses spatio-temporal convolutions to produce an embedding per video frame. The embeddings are fed to a single-layer GRU to create a coherent sequence of representations which is then passed to an Audio Decoder network which will produce 640 audio samples per embedding. Concatenating these chunks of samples without overlap forms a waveform. Both the Video Encoder and Audio Decoder are fully convolutional networks, with the Audio Decoder using an additional self-attention layer (Zhang et al., 2019) before the last layer as shown in Figure 6.
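A skeleton of this pipeline is sketched below; the encoder and decoder internals are omitted and the embedding size is an assumption.

```python
import torch.nn as nn

class ForwardNet(nn.Module):
    """Video -> waveform: per-frame embeddings, a single-layer GRU, and a
    decoder emitting 640 audio samples per embedding."""

    def __init__(self, video_encoder, audio_decoder, emb_dim=256):
        super().__init__()
        self.video_encoder = video_encoder  # (B, C, T, H, W) -> (B, T, emb_dim)
        self.gru = nn.GRU(emb_dim, emb_dim, num_layers=1, batch_first=True)
        self.audio_decoder = audio_decoder  # (B, T, emb_dim) -> (B, T, 640)

    def forward(self, video):
        emb = self.video_encoder(video)
        seq, _ = self.gru(emb)
        chunks = self.audio_decoder(seq)           # 640 samples per video frame
        return chunks.reshape(chunks.size(0), -1)  # concatenate without overlap
```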
The Reverse network is made up of two encoders responsible for capturing the speaker identity and content. The content stream uses a sliding window approach to create a sequence of embeddings for the audio using an Audio Encoder and a 2-layer GRU. The identity stream consists of an Identity Encoder which captures the identity of the person and enforces it on the generated video. The two embeddings are concatenated and fed to a Frame Decoder which produces a video sequence. Skip connections between the Identity Encoder and Frame Decoder ensure that the face is accurately reconstructed. A detailed illustration of the Reverse network is shown in Figure 7.
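In the same style, the Reverse network reduces to the following skeleton; the skip connections between the Identity Encoder and Frame Decoder are omitted for brevity.

```python
import torch
import torch.nn as nn

class ReverseNet(nn.Module):
    """Waveform + first frame -> video: a content stream (audio encoder + GRU)
    fused by concatenation with an identity embedding."""

    def __init__(self, audio_encoder, identity_encoder, frame_decoder, emb_dim=256):
        super().__init__()
        self.audio_encoder = audio_encoder        # sliding windows -> (B, T, emb_dim)
        self.gru = nn.GRU(emb_dim, emb_dim, num_layers=2, batch_first=True)
        self.identity_encoder = identity_encoder  # first frame -> (B, emb_dim)
        self.frame_decoder = frame_decoder        # (B, T, 2*emb_dim) -> video frames

    def forward(self, audio, first_frame):
        content, _ = self.gru(self.audio_encoder(audio))
        ident = self.identity_encoder(first_frame)
        ident = ident.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.frame_decoder(torch.cat([content, ident], dim=-1))
```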
A.2 CELEBA SEGMENTATION
In Table 1 we notice that the segmentation evaluation on generated images surpasses that on real images. The reason for this is a number of inconsistencies in the labelled images. The examples in Figure 8 show that in these cases some objects are labelled despite being occluded in the real image. However, these objects will appear in the generated images, since the labelled images are used to drive their generation. These small inconsistencies in the data annotations explain why segmentation is slightly better for synthesized samples.
A.3 ABLATION STUDY
In order to measure the effect of the reconstruction loss and the adaptive balancing used in the DINO framework, we perform an ablation study on the CelebAMask-HQ dataset. The results of the study are shown in Table 4. As expected, the addition of the L1 loss results in a higher PSNR and SSIM, since these metrics depend on the reconstruction error, which is directly optimised by this loss. More importantly, we note that the addition of the L1 loss improves the FID score, since it prevents mode collapse. This is evident when observing the examples shown in Figure 9, which shows the mode-dropping that occurs in both DINO and Pix2Pix when this loss is omitted. Finally, we notice that the adaptive balancing used in DINO allows for more stable training and improves performance, which is reflected across all metrics.
A.4 ADAPTIVE BALANCING
As mentioned in Section 3, DINO uses a controller to ensure that the energy of generated samples is always a fixed multiple of the energy of real samples. Although this approach is similar to that used by BEGAN (Berthelot et al., 2017), there is a key difference. BEGANs perform autoencoding and therefore assume that the discriminator's reconstruction error will be larger for real samples, since they have more details which are harder to reconstruct. For this reason, the controller used by BEGAN tries to maintain a balance throughout training where $L(x_{real}) > L(x_{fake})$. In the DINO framework the discriminator performs domain translation, so it is natural to assume that real samples should produce better reconstructions, since they contain useful information regarding the semantics. For this reason we choose to maintain a balance where $L(x_{fake}) > L(x_{real})$. This is reflected in the controller update as well as in the balance parameter of DINO, which is the inverse of that used in BEGANs.

This difference in the direction of the balance makes BEGAN unsuitable for use with the predictive conditioning proposed in this paper, since it allows the generator to "hide" information about the condition in synthesized samples. The generator thus tricks the discriminator into producing a much better reconstruction of the condition for fake samples without the need for them to be realistic. Since the BEGAN controller prevents the discriminator from pushing fake samples to a higher energy than real samples (i.e. the controller output is zero when fake samples have higher energy), this behaviour is never penalized during training.
The method used in DINO does not have this problem, since it encourages the discriminator to assign higher energies to unrealistic samples, thus penalizing them and preventing the generator from "cheating" in the way described above. To show this effect we train a conditional BEGAN and the DINO framework to perform translation from photo to sketch using the APDrawings dataset from Yi et al. (2019). Figure 10 shows how the balancing used in DINO allows the network to penalize unrealistic images by encouraging the discriminator to assign them energies larger than those of real samples. We note that this problem occurs only in cases where the source domain is more informative than the target domain (i.e. photo → sketch). It does not occur in cases where the source domain is more generic than the target domain (i.e. segmentation map → photo).
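The contrast between the two balancing directions reduces to a sign pattern in the controller updates. The sketch below juxtaposes BEGAN's published update with the DINO update from Equation 5; the clamping to [0, 1] is shared.

```python
def began_update(k, lam, l_real, l_fake, gamma):
    # BEGAN maintains L(x_real) > L(x_fake): k grows while gamma*l_real > l_fake.
    return min(max(k + lam * (gamma * l_real - l_fake), 0.0), 1.0)

def dino_update(k, lam, l_real, l_fake, gamma):
    # DINO maintains L(x_fake) > L(x_real): k grows while l_real > gamma*l_fake.
    return min(max(k + lam * (l_real - gamma * l_fake), 0.0), 1.0)
```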
A.5 QUALITATIVE RESULTS
A.5.1 IMAGE-TO-IMAGE TRANSLATION
CelebAMask-HQ
Examples of image-to-image translation from segmentation maps to photos for the CelebAMask-HQ dataset are shown in Figure 11. We note that our approach is able to maintain semantics and produce realistic results even in cases with extreme head poses and facial expressions.
Cityscapes
Examples of image-to-image translation from segmentation maps to photos for the Cityscapes dataset are shown in Figure 12.
A.6 VIDEO-TO-SPEECH TRANSLATION
This section presents examples of waveforms produced by the methods compared in Table 3. In addition to the waveforms we also present their corresponding spectrograms. The waveforms and spectrograms are shown in Figure 13. It is evident from the shape of the waveform that our method more accurately captures voiced sections in the audio. Furthermore, the spectrogram of our method closely resembles that of the ground truth, although some high-frequency components are not captured. The performance is similar to the Perceptual GAN proposed by Vougioukas et al. (2019), although our method relies only on an adversarial loss.

1. What is the main contribution of the paper, and how does it differ from prior works in the field?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to image-to-image translation tasks?
3. How does the reviewer assess the clarity and quality of the writing in the paper?
4. What are some questions or concerns the reviewer has regarding the experimental setup, dataset choice, and comparisons with other works in the field?
5. Are there any specific aspects of the paper that the reviewer found difficult to understand or felt were not adequately explained?

Review
This paper proposes a conditional energy-based GAN technique for translation between data domains.
First, let me preface this review by noting that this paper is far outside of my area of expertise. I have tried to do my best in reviewing it, but I'd appreciate any clarification of mistaken points from the authors.
Overall, the idea itself seems reasonable: in conditional GAN-based models, instead of using a discriminator that explicitly tries to predict whether the generated output is true or fake, the discriminator tries to maximize the reconstruction score of true outputs, and minimize it for fake outputs.
However, there were many questions I had based on just reading the paper. I'm not sure whether this is due to my lack of background knowledge in this field, or because the writing itself is unclear (perhaps a bit of both):
First big question: from my reading it seems that this is a supervised model, in that it needs (x, y) pairs to calculate the objective in Equation (2). Is this correct? It doesn't seem to be explicitly stated anywhere.
Given this, I was not sure if the baselines in Tables 1 and 2 actually represent the state-of-the-art in this domain. Pix2pix seems to be from 2017, which seems to be quite old given the huge progress this field has made in the past 3 years. BiCycleGAN, according to my understanding, is an unsupervised method, which presumably will do much worse than supervised methods.
Are the datasets in 4.1 standard and used in the literature? If so, what are some recent papers that evaluate on these datasets? If not, why use these datasets instead of others? While I'm not very familiar with the field, I do know that image style transfer is a big thing, and surely there are other datasets that people have evaluated on previously.
The description of Equation (1) was a bit hard to follow, as the role of the discriminator was not made explicit. The "margin loss" was also not explained concretely.
I was not able to understand the description of γ in Equation (3); please elaborate if possible. The gain value λ was also not easy to follow.
It was mentioned that MirrorGAN is the most similar method. While there was an explanation that DINO is simpler than MirrorGAN, it would be nice to explain the implications of this. Does this just mean that DINO is a bit easier to implement? Or does it mean that it is fundamentally applicable to a wider variety of tasks? Also, why is there no empirical comparison with MirrorGAN? |
Balancing is achieved using a proportional controller with gain λ. A typical value for the gain is λ = 0.001. The output of the controller kt ∈ [0, 1] determines the amount of emphasis that the Reverse network places on the reconstruction error of generated samples. The balance determines an upper bound for the energy of fake samples, which is a fixed multiple of the energy assigned to real samples. When the generator is producing samples with a low energy they are pushed to this limit faster than when the generator is already producing high-energy samples. Since the ratio of reconstruction errors is kept fixed this limit will decay as the reconstruction error for real samples improves over time. This achieves a similar result to a decaying margin loss without the necessity for a decay schedule. The output of the controller as well as the reconstruction error for real and fake samples during training is shown in Figure 2. We notice that the controller output increases at the start of training in order to push generated samples to a higher energy value and reduces once the limit determined by γ is reached. Although this approach is inspired by BEGAN there are some key differences which prevent the BEGAN from working with the predictive conditioning proposed in this paper. These are discussed in detail in Section A.4 of the appendix.
In practice we find it advantageous to use the margin loss in combination with adaptive balancing. In this case the margin parameter serves as a hard cutoff for the energy of generated samples and
helps stabilize the system at the beginning of training. As training progresses and reconstruction of real samples improves training relies more on the soft limit enforced by the energy balancing mechanism. In this case we can set γ = 0 to fall back to a fixed margin approach. The training objective is shown in Equation 5. When dealing with one-to-many scenarios we find that adding a reconstruction loss to the generator’s objective can help improve sample diversity.
LR = ‖R(y)− x‖+ kt ·max(0,m− ‖R(F (x))− x‖) LF = ‖R(F (x))− x‖ kt+1 = kt + λ · [‖R(y)− x‖ − γ · ‖R(F (x))− x‖]
(5)
3.1 BIDIRECTIONAL TRANSLATION
It is evident from Equation 5 that the two networks have different goals and that only the Forward network will produce realistic translations since the Reverse network is trained only using an MSE loss. This prohibits its use for domain translation and limits it to facilitating the training of the Forward network. For the DINO framework, since the Forward and Reverse network have the same structure we can swap the roles of the networks and retrain to obtain realistic translation in the opposite direction. However, it is also possible to train both networks simultaneously by combining the objectives for both roles (i.e. discriminator and generator). This results in the following zero-sum two player game:
min R max F V (R,F ) = L (R(y), x)−L (R(F (x)), x) + L (F (R(y)), y)−L (F (x), y) (6)
In this game both players have the same goal which is to minimize the reconstruction error for real samples and to maximize it for fake samples while also ensuring that their samples are assigned a low energy by the other player. Each player therefore behaves both as a generator and as a discriminator. However, in practice we find that is difficult for a network to achieve the objectives for both roles, causing instability during training. The work proposed by Chen et al. (2020), where discriminators and generators share encoders, comes to a similar conclusion and proposes decoupling the training for different parts of the networks. This is not possible in our framework since the discriminator for one task is the generator for the other. To solve this problem we propose branching the decoders of the networks to create two heads which are exclusively used for either discrimination or generation. We find empirically that the best performance in our image-to-image experiments is achieved when branching right after the third layer of the decoder. Additionally, the network encoders are frozen during the generation stage. The bidirectional training paradigm is illustrated in Figure 3.
When training network R as a discriminator we use the stream that passes through the discriminative head Rdisc and when training as a generator we use the stream that uses the generative head
Rgen. The same applies for player F and which uses streams Fdisc and Fgen for discrimination and generation, respectively. To maintain balance during training we use a different controller for each player which results the objective shown in Equation 7. The first two terms in each players objective represent the player’s goal as a discriminator and the last term reflects its goal as a generator. LR = L (Rdisc(y), x)− kt ·L (Rdisc(Fgen(x))− x)︸ ︷︷ ︸ discriminator objective +L (Fdisc(Rgen(y)), y)︸ ︷︷ ︸ generator objective LF = L (Fdisc(x), y)− µt ·L (Fdisc(Rgen(y)), y)︸ ︷︷ ︸ discriminator objective +L (Rdisc(Fgen(x)), x)︸ ︷︷ ︸ generator objective
kt+1 = kt + λR · [L (Rdisc(y), x)− γD ·L (Rdisc(Fgen(x)), x)] µt+1 = µt + λF · [L (Fdisc(x), y)− γG ·L (Fdisc(Rgen(y)), y)]
(7)
3.2 COMPARISON WITH OTHER METHODS
As mentioned in section 2.1 the cGAN conditioning mechanism, used in most supervised translation systems, struggles to preserve the shared semantics in cases where there is no structural similarity between domains. The DINO framework attempts to overcome this limitation by using a different paradigm, where the condition is predicted by the discriminator instead of being fed as an additional input, forcing the generator to maintain the common semantics. Our approach is inspired by several semi-supervised training techniques for GANs (Salimans et al., 2016; Odena et al., 2017; Springenberg, 2015), which have showed that specializing the discriminator by performing a classification task adds structure to its latent space and improves the quality of generated samples. However, these approaches are not designed for conditional generation and use classification only as a proxy task. This differs from our approach where discrimination is driven directly by the prediction of condition.
Another advantage of our system stems from its use of an encoder-decoder structure for the Reverse network. This provides flexibility since the Reverse network can be easily adapted to perform a variety of different translation tasks. In contrast, the multi-stream discriminators used in crossmodal cGANs require fusing representations from different streams. The fusion method as well as the stage at which embeddings are fused is an important design decision that must be carefully chosen depending on the task since it can greatly affect the performance of these models.
The objective of the generator in Equation 5 resembles the cycle-consistency loss used in many unsupervised methods such as CycleGAN (Zhu et al., 2017a) and NICE-GAN (Chen et al., 2020). This also bears resemblance to the back-translation used in bidirectional neural machine translation methods (Artetxe et al., 2018; Lample et al., 2018). However, it is important to note that the cycleconsistency loss used in these approaches is not an adversarial loss since it is optimized with respect to both networks’ parameters. The most similar work to ours is MirrorGAN (Qiao et al., 2019), which improves the generation of images through re-description. This model however uses a pretrained network for re-description in addition to an adversarial loss. Compared to all aforementioned approaches the DINO framework is the only one in which the adversarial loss alone can both achieve sample realism while enforcing correspondence. Finally, since our bidirectional framework uses the generators for discrimination it requires far fewer parameters than these approaches.
4 EXPERIMENTS
We evaluate the DINO framework on image-to-image translation since this the most typical application for domain-translation systems. Additionally, we tackle the problem of video-driven speech reconstruction, which involves synthesising intelligible speech from silent video. In all of the experiments focus is placed not only on evaluating the quality of the generated samples but also verifying that the semantics are preserved after translation.
4.1 IMAGE-TO-IMAGE TRANSLATION
The majority of modern domain translation methods have been applied to image-to-image translation problems, since it is common for both domains to share high-level structure and therefore easier to capture their correspondence. We evaluate the DINO framework on the CelebAMask-HQ (Lee et al., 2020) and the Cityscapes (Cordts et al., 2016) datasets, using their recommended training-test splits.
When judging the performance of image-to-image translation systems one must consider multiple factors including the perceptual quality, the semantic consistency and the diversity of the generated images. We therefore rely on a combination of full-reference reconstruction metrics and perceptual metrics for image assessment.
Reconstruction metrics such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) measure the deviation of generated images from the ground truth. Although these metrics are good at measuring image distortion they are usually poor indicators of realism and they penalize diversity. For this reason, we also measure the perceptual quality of the images by using the Fréchet Inception Distance (FID), which compares the statistics of the embeddings of real and fake images in order to measure the quality and diversity. Furthermore, we use the cumulative probability blur detection (CPBD) metric (Narvekar & Karam, 2009) to assess image sharpness. Finally, we use pre-trained semantic segmentation models to verify that image semantics are accurately captured in the images. For the CelebAMask-HQ dataset we use the segmentation model from Lee et al. (2020) and for the Cityscapes dataset we use a DeepLabv3+ model (Chen et al., 2018). We report the pixel accuracy as well as the average intersection over union (mIOU).
We compare our method to other supervised image-to-image translation models, such as Pix2Pix2 and BiCycleGAN3. Since DINO is a generic translation method comparing it to translation methods that are tailored to a specific type of translation (Yi et al., 2019) is an unfair comparison since these methods make use of additional information or use task-specific losses. Nevertheless, we present the results for SPADE4 (Park et al., 2019) on the Cityscapes dataset in order to see how well our approach performs compared to state-of-the-art task-specific translation methods. Since the pretrained SPADE model generates images at a resolution of 512× 256 we resize images to 256× 256 for a fair comparison.
When training the DINO model we resize images to 256 × 256 and use a networks with a similar U-Net architecture to the Pix2Pix model to ensure a fair comparison. The architecture of the networks used in these experiments can be found in section A.1.1 of the appendix. Additionally, like Pix2Pix we use an additional L1 loss to train the Forward network (generator), which helps improve image diversity. The balance parameter γ is set to 0.8 for image-to-image translation experiments. We train using the Adam optimizer (Kingma & Ba, 2015), with a learning rate of 0.0002, and momentum parameters β1 = 0.5, β2 = 0.999. The quantitative evaluation on the CelebAMask-HQ and Cityscapes datasets is shown in Tables 1 and 2. Qualitative results are presented in Section A.5.1 of the appendix.
The results on Tables 1 and 2 show that our method outperforms the Pix2Pix and BicycleGAN models both in terms of perceptual quality as well as reconstruction error. More importantly, our approach is better at preserving the image semantics as indicated by the higher pixel accuracy and mIoU. We notice that for the CelebAMask-HQ dataset the segmentation accuracy is better for generated images than for real images. This phenomenon is due to some inconsistent labelling and is explained in Section A.2 of the appendix. We also note that the bidirectional DINO framework can simultaneously train two networks to perform translation in both directions without a sacrificing quality and with fewer parameters. Finally, an ablation study for our model is performed in A.3 of the appendix.
When comparing our results to those achieved by the SPADE network on the Cityscapes dataset we notice that our model performs similarly, achieving slightly better performance on reconstruction metrics (PSNR, SSIM) and slightly worse performance for preserving the image semantics. This is expected since the SPADE model has been specifically designed for translation from segmentation maps to images. Furthermore, the networks used in these experiments, for the DINO framework, are far simpler (37 million parameters in the generator compared to 97 million). More importantly, unlike SPADE our network can be applied to any task and perform the translation in both directions.
4.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION
Many problems require finding a mapping between signals from different modalities (e.g. speechdriven facial animation, caption-based image generation). This is far more challenging than imageto-image translation since signals from different modalities do not have structural similarities, making it difficult to capture their correspondence. We evaluate the performance of our method on video-driven speech reconstruction, which involves synthesising speech from a silent video. This is a notoriously difficult problem due to ambiguity which is attributed to the existence of homophenous words. Another reason for choosing this problem is that common reconstruction losses (e.g. L1, MSE), which are typically used in image-to-image translation to enforce low-frequency correctness (Isola et al., 2017) are not helpful for the generation of raw waveforms. This means that methods must rely only on the conditional adversarial loss to enforce semantic consistency.
We show that the DINO framework can synthesize intelligible speech from silent video using only the adversarial loss described in Equation 5. Adjusting the framework for this task requires using encoders and decoders that can handle audio and video as shown in Figure 4. The Forward network transforms a sequence of video frames centered around the mouth to its corresponding waveform. The Reverse network is fed a waveform and the initial video frame to produce a video sequence of the speaker. The initial frame is provided to enforce the speaker identity and ensures that the reconstruction error will be based on the facial animation and not on any differences in appearance. This forces the network to focus on capturing the content of the speech and not the speaker’s identity.
Experiments are performed on the GRID dataset (Cooke et al., 2006), which contains short phrases spoken by 33 speakers. There are 1000 phrases per speaker, each containing 6 words from a vocabulary of 51 words. The data is split according to Vougioukas et al. (2019) so that the test set contains unseen speakers and phrases. As baselines for comparison we use a conditional version of WaveGAN (Donahue et al., 2019) and a CycleGAN framework adapted for video-to-audio translation. Additionally, we compare with the model proposed by Vougioukas et al. (2019), which is designed for video-driven speech reconstruction and uses a perceptual loss to accurately capture the spoken content. An Adam optimiser is used with a learning rate of 0.0001 for the video-to-audio network and a learning rate of 0.001 for the audio-to-video network. The balancing parameter γ is set to 0.5.
We evaluate the quality of the synthesized audio based on intelligibility and spoken word accuracy. We measure speech quality using the mean Mel Cepstral Distance (MCD) (Kubichek, 1993), which measures the distance between two signals in the mel-frequency cepstrum and is often used to assess synthesized speech. Furthermore, we use the Short-Time Objective Intelligibility (STOI) (Taal et al., 2011) and Perceptual Evaluation of Speech Quality (PESQ) (Rix et al., 2001) metrics, which measure the intelligibility of the synthesized audio. Finally, in order to verify the semantic consistency of the spoken message we use a pretrained automatic speech recognition (ASR) model and measure the Word Error Rate (WER). The results for the speech-reconstruction task are shown in Table 3.
The results of Table 3 show that our method is capable of producing intelligible speech and achieving similar performance to the model proposed by Vougioukas et al. (2019). Furthermore, the large WER for both baselines highlights the limitations of cGANs and CycleGANs for cross-modal translation. Although our approach is better at capturing the content and audio-visual correspondence, we notice that samples all share the same robotic voice compared to the other methods. This is expected since discrimination using our approach focuses mostly on audio-visual correspondence and not capturing the speaker identity. Examples of synthesized waveforms and their spectrograms are shown in Section A.6 of the appendix and samples are provided in the supplementary material.
Ethical considerations: We have tested the DINO model on this task as an academic investigation to test its ability to capture common semantics even across modalities. Video-driven speech reconstruction has many practical applications especially in digital communications. It enables videoconferencing in noisy or silent environments and can improve hearing-assistive devices. However, this technology can potentially be used in surveillance systems which raises privacy concerns. Therefore, although we believe that this topic is worth exploring, future researchers should be careful when developing features that will enable this technology to be used for surveillance purposes.
5 CONCLUSIONS
In this paper we have presented a domain translation framework, based on predictive conditioning. Unlike other conditional approaches, predicting the condition forces the discriminator to learn the the relationship between domains and ensures that the generated samples preserve cross-domain semantics. The results on image-to-image translation verify that our approach is capable of producing sharp and realistic images while strongly enforcing semantic correspondence between domains. Furthermore, results on video-driven speech reconstruction show that our method is applicable to a wide range of problems and that correspondence can be maintained even when translating across different modalities. Finally, we present a method for bidirectional translation and show that it achieves the same performance while reducing the number of training parameters compared to other models.
A APPENDIX
A.1 NETWORK ARCHITECTURE
A.1.1 IMAGE-TO-IMAGE TRANSLATION
This section describes the network architecture used for the image-to-image translation experiments in Section 4.1. The two networks used in the DINO framework are identical and both use a U-Net encoder-decoder architecture similar to that used in Pix2Pix (Isola et al., 2017). The encoder is a 7-layer Convolutional Neural Network (CNN) made of strided 2D convolutions. The decoder is a 12-layer CNN made of 2D convolutions and up-sampling layers. We use Instance Normalization (Ulyanov et al., 2016), which has been shown to work well in style transfer applications. The network is shown in detail in Figure 5.
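To make the description concrete, here is a minimal PyTorch sketch of such a U-Net translator: 7 strided encoder layers and an upsampling decoder with skip connections, as in the text. The channel widths, kernel sizes, and activation choices are assumptions rather than the exact configuration of Figure 5, and the sketch assumes input side lengths divisible by 128 (e.g. 256×256).

```python
import torch
import torch.nn as nn

class UNetTranslator(nn.Module):
    """Strided-conv encoder (7 layers) + upsampling decoder with skip connections."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        enc_ch = [64, 128, 256, 512, 512, 512, 512]          # 7 encoder layers
        self.encoder = nn.ModuleList()
        prev = in_ch
        for c in enc_ch:
            self.encoder.append(nn.Sequential(
                nn.Conv2d(prev, c, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm2d(c),
                nn.LeakyReLU(0.2, inplace=True)))
            prev = c
        self.decoder = nn.ModuleList()
        for c in reversed(enc_ch[:-1]):                      # upsample + conv blocks
            self.decoder.append(nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(prev, c, kernel_size=3, padding=1),
                nn.InstanceNorm2d(c),
                nn.ReLU(inplace=True)))
            prev = c * 2                                     # after skip concatenation
        self.out = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(prev, out_ch, kernel_size=3, padding=1),
            nn.Tanh())

    def forward(self, x):                                    # x: (B, in_ch, H, W)
        feats = []
        for layer in self.encoder:
            x = layer(x)
            feats.append(x)
        for layer, skip in zip(self.decoder, feats[-2::-1]): # deepest skip first
            x = layer(x)
            x = torch.cat([x, skip], dim=1)                  # U-Net skip connection
        return self.out(x)
```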
A.1.2 VIDEO-DRIVEN SPEECH RECONSTRUCTION
This section describes the architecture of the networks used for video-driven speech reconstruction in the experiments of Section 4.2. In this scenario the Forward network synthesizes speech and the Reverse network performs speech-driven facial animation. The Forward network is made up of a Video Encoder, a single-layer GRU and an Audio Decoder. The video sequence is fed to the Video Encoder, which uses spatio-temporal convolutions to produce an embedding per video frame. The embeddings are fed to a single-layer GRU to create a coherent sequence of representations which is then passed to an Audio Decoder network which will produce 640 audio samples per embedding. Concatenating these chunks of samples without overlap forms a waveform. Both the Video Encoder and Audio Decoder are fully convolutional networks, with the Audio Decoder using an additional self-attention layer (Zhang et al., 2019) before the last layer as shown in Figure 6.
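A rough sketch of the Forward network's data flow, under stated assumptions: the paper's Audio Decoder is fully convolutional with a self-attention layer, which is simplified here to a small per-frame decoder, and all layer widths are illustrative only. What the sketch preserves is the structure described above: per-frame embeddings from a spatio-temporal encoder, a single-layer GRU, and 640 audio samples emitted per frame and concatenated without overlap.

```python
import torch
import torch.nn as nn

class VideoToSpeech(nn.Module):
    """Video Encoder -> single-layer GRU -> Audio Decoder (640 samples per frame)."""
    def __init__(self, emb_dim=256, samples_per_frame=640):
        super().__init__()
        self.video_encoder = nn.Sequential(                  # spatio-temporal convs
            nn.Conv3d(3, 64, kernel_size=(3, 5, 5),
                      stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, emb_dim, kernel_size=(3, 5, 5),
                      stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((None, 1, 1)))              # one embedding per frame
        self.gru = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.audio_decoder = nn.Sequential(                  # simplified per-frame decoder
            nn.Linear(emb_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, samples_per_frame), nn.Tanh())

    def forward(self, video):                                # video: (B, 3, T, H, W)
        feats = self.video_encoder(video)                    # (B, emb, T, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, emb)
        seq, _ = self.gru(feats)                             # coherent sequence
        chunks = self.audio_decoder(seq)                     # (B, T, 640)
        return chunks.flatten(1)                             # concatenate, no overlap
```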
The Reverse network is made up of two encoders responsible for capturing the speaker identity and content. The content stream uses a sliding window approach to create a sequence of embeddings for the audio using an Audio Encoder and a 2-layer GRU. The identity stream consists of an Identity Encoder which captures the identity of the person and enforces it on the generated video. The two embeddings are concatenated and fed to a Frame Decoder which produces a video sequence. Skip connections between the Identity Encoder and Frame Decoder ensure that the face is accurately reconstructed. A detailed illustration of the Reverse network is shown in Figure 7.
A.2 CELEBA SEGMENTATION
In Table 1 we notice that the segmentation evaluation on generated images surpasses that of real images. The reason for this is a number of inconsistencies in the labelled images. Examples in Figure 8 show that in these cases some objects are labeled despite being occluded in the real image. However, these objects will appear in the generated images since the labelled images are used to drive their generation. These small inconsistencies in the data annotations explain why segmentation is slightly better for synthesized samples.
A.3 ABLATION STUDY
In order to measure the effect of the reconstruction loss and adaptive balancing used in the DINO framework we perform an ablation study on the CelebAMask-HQ dataset. The results of the study are shown in Table 4. As expected, the addition of the L1 loss results in a higher PSNR and SSIM since these metrics depend on the reconstruction error, which is directly optimised by this loss. More importantly, we note that the addition of the L1 loss improves the FID score since it prevents mode collapse. This is evident when observing the examples shown in Figure 9, which illustrate the mode-dropping that occurs in both DINO and Pix2Pix when this loss is omitted. Finally, we notice that the adaptive balancing used in DINO allows for more stable training and improves performance, which is reflected across all metrics.
A.4 ADAPTIVE BALANCING
As mentioned in Section 3, DINO uses a controller to ensure that the energy of generated samples is always a fixed multiple of the energy of real samples. Although this approach is similar to that used by BEGAN (Berthelot et al., 2017), there is a key difference. BEGANs perform autoencoding and therefore assume that the discriminator’s reconstruction error will be larger for real samples since they have more details which are harder to reconstruct. For this reason, the controller used by BEGAN tries to maintain a balance throughout training where $L(x_{\mathrm{real}}) > L(x_{\mathrm{fake}})$. In the DINO framework the discriminator performs domain translation, so it is natural to assume that real samples should produce better reconstructions since they contain useful information regarding the semantics. For this reason we choose to maintain a balance where $L(x_{\mathrm{fake}}) > L(x_{\mathrm{real}})$. This is reflected in the controller update as well as the balance parameter of DINO, which is the inverse of that used in BEGANs.
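A toy sketch of how such a proportional controller could be implemented. The update direction follows the stated requirement that $L(x_{\mathrm{fake}})$ be kept at a fixed multiple of $L(x_{\mathrm{real}})$; the gain `lambda_k`, the target ratio `gamma`, the clamping, and the discriminator-loss form are assumptions borrowed from the BEGAN recipe, not DINO's exact update.

```python
def update_controller(k, loss_real, loss_fake, gamma=2.0, lambda_k=0.001):
    """Proportional controller driving L(x_fake) towards gamma * L(x_real),
    with gamma >= 1. A larger k makes the discriminator push the energy of
    fake samples up harder in a loss of the form loss_D = loss_real - k * loss_fake."""
    # If fakes are not yet penalized enough (L_fake < gamma * L_real), grow k.
    k = k + lambda_k * (gamma * loss_real - loss_fake)
    return min(max(k, 0.0), 1.0)   # clamp to [0, 1], as in the BEGAN recipe
```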
As mentioned above, the core difference from the adaptive balancing in BEGAN is that DINO maintains a balance where $L(x_{\mathrm{fake}}) > L(x_{\mathrm{real}})$, whereas BEGAN maintains a balance for which $L(x_{\mathrm{real}}) > L(x_{\mathrm{fake}})$. This makes BEGAN unsuitable for use with the predictive conditioning proposed in this paper, since it allows the generator to “hide” information about the condition in synthesized samples. The generator thus tricks the discriminator into producing a much better reconstruction of the condition for fake samples without the need for them to be realistic. Since the BEGAN controller prevents the discriminator from pushing fake samples to a higher energy than real samples (i.e. the controller output is zero when fake samples have higher energy), this behaviour is never penalized during BEGAN training.
The method used in DINO however does not have this problem since it encourages the discriminator to assign higher energies to unrealistic samples thus penalizing them and preventing the generator from “cheating” in the same way as BEGAN. To show this effect we train a conditional BEGAN
and the DINO framework to perform translation from photo to sketch using the APDrawings dataset from (Yi et al., 2019). Figure 10 shows how the balancing used in DINO allows the network to penalize unrealistic images by encouraging the discriminator to assign to them energies larger than the real samples. We note that this problem occurs only in cases where the source domain is more informative than the target domain (i.e. photo → sketch). It does not occur in cases where the source domain is more generic than the target domain (i.e. segmentation map → photo).
A.5 QUALITATIVE RESULTS
A.5.1 IMAGE-TO-IMAGE TRANSLATION
CelebAMask-HQ
Examples of image-to-image translation from segmentation maps to photos for the CelebAMask-HQ dataset are shown in Figure 11. We note that our approach is able to maintain semantics and produce realistic results even in cases with extreme head poses and facial expressions.
Cityscapes
Examples of image-to-image translation from segmentation maps to photos for the Cityscapes dataset are shown in Figure 12.
A.6 VIDEO-TO-SPEECH TRANSLATION
This section presents examples of waveforms produced by the methods compared in Table 3. In addition to the waveforms we also present their corresponding spectrograms. The waveforms and spectrograms are shown in Figure 13. It is evident from the shape of the waveform that our method more accurately captures voiced sections in the audio. Furthermore, the spectrogram of our method closely resembles that of the ground truth, although some high frequency components are not captured. The performance is similar to the Perceptual GAN proposed by Vougioukas et al. (2019) although our method relies on only an adversarial loss. | 1. What are the strengths and weaknesses of the paper regarding its contributions, experimental results, and ethical considerations?
2. How does the reviewer perceive the similarity between the proposed method and recent works in unsupervised machine translation?
3. What are the reviewer's concerns regarding the lip-reading task and privacy-related issues?
4. How could the authors improve the clarity and notation of Equation 6 and the surrounding text?
5. Are there any additional remarks or suggestions that the reviewer has for improving the paper? | Review | Review
This paper presents a method for performing cross-domain or cross-modality translation models using a GAN-flavored framework where two models are trained to translate in both directions simultaneously.
[As a caveat: I am not well-versed in this area of the literature]
The paper is well-written for the most part and the experimental results are promising. My main concern is the ethical implications of some of the lip-reading experiments, which go unaddressed.
Pros
Clear presentation (mostly, see remarks for some exceptions)
Good results as far as I can tell (it is hard to interpret what all the various metrics mean, but it does seem like DINO is consistently better than the alternatives along most metrics)
Experiments are not limited to image to image translation but also to "cross-modality translation" (image to text)
Cons
If I understand correctly, the video->speech task is essentially a lip-reading task. State-of-the-art lip reading raises a number of privacy-related concerns, and I think the potential impact of this research should at least be acknowledged in the paper.
Remarks
The proposed method bears some conceptual similarity with recent work in unsupervised machine translation (see eg. Artetxe et al. https://arxiv.org/abs/1710.11041, Lample et al. https://arxiv.org/abs/1711.00043, also Lample et al. 2019 https://openreview.net/pdf?id=H1g2NhC5KQ which is particularly relevant as it tackles "style transfer" for text). These similarities are worth mentioning in the paper.
I found the use of the term "discriminator" confusing, especially in the beginning in the paper. It makes sense in the usual GAN setup where the discriminator is an actual discriminative model, but it seems inappropriate in this case where the "discriminator" is a generative model.
Eq. 6 (the main objective) is incredibly confusing. First (and this relates to my last remark), the notation D_disc, D_gen, G_disc, G_gen is unnecessarily confounding. Consider using "s -> t" and "t->s" for source to target and vice-versa instead of G and D. Also, perhaps color-coding Eq. 6 would make it easier to parse (or maybe just separate the different terms a bit more). |
ICLR | Title
Tuning Recurrent Neural Networks with Reinforcement Learning
Abstract
The approach of training sequence models using supervised learning and next-step prediction suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. We propose a novel sequence-learning approach in which we use a pre-trained Recurrent Neural Network (RNN) to supply part of the reward value in a Reinforcement Learning (RL) model. Thus, we can refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from KL control. We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note-RNN is then refined using our method and rules of music theory. We show that by combining maximum likelihood (ML) and RL in this way, we can not only produce more pleasing melodies, but significantly reduce unwanted behaviors and failure modes of the RNN, while maintaining information learned from data.
1 INTRODUCTION
Generative modeling of music with deep neural networks is typically accomplished by training an RNN such as a Long Short-Term Memory (LSTM) network to predict the next note in a musical sequence (e.g. Eck & Schmidhuber (2002)). Similar to a Character RNN (Mikolov et al., 2010), these Note RNNs can be used to generate novel melodies by initializing them with a short sequence of notes, then repeatedly sampling from the model’s output distribution to obtain the next note. While melodies and text generated in this way have recently garnered attention1, this type of model tends to suffer from common failure modes, such as excessively repeating tokens, or producing sequences that lack a consistent theme or structure. Such sequences can appear wandering and random (see Graves (2013) for a text example).
Music compositions adhere to relatively well-defined structural rules, making music an interesting sequence generation challenge. For example, music theory tells us that groups of notes belong to keys, chords follow progressions, and songs have consistent structures made up of musical phrases. Our research question is therefore whether such music-theory-based constraints can be learned by an RNN, while still allowing it to maintain note probabilities learned from data.
To approach this problem we propose RL Tuner, a novel sequence learning approach in which RL is used to impose structure on an RNN trained on data. The reward function in our framework combines task-related rewards with the probability of a given action originally learned by the pre-trained RNN. Thus, our model directly preserves information about the original probability distributions learned from data, while allowing us to explicitly control the trade-off between the influence of data
1http://www.theverge.com/2016/6/1/11829678/google-magenta-melody-art-generative-artificialintelligence
and heuristic rewards. This is an important novel direction of research, because in many tasks the available reward functions are not a perfect metric that alone will lead to the best task performance in the real world (e.g. BLEU score). Unlike previous work (e.g. (Ranzato et al., 2015), (Bahdanau et al., 2016), (Norouzi et al., 2016), (Li et al., 2016)) we do not use ML training as a way to simply bootstrap the training of an RL model, but rather we rely mainly on information learned from data, and use RL only as a way to refine characteristics of the output by imposing structural rules.
This paper contributes to the sequence training and RL literature by a) proposing a novel method for combining ML and RL training; b) showing the connection between this approach and Stochastic Optimal Control (SOC)/KL-control with a pre-trained RNN as a prior policy; c) showing the explicit relationships among a generalized variant of Ψ-learning (Rawlik et al., 2012), G-learning (Fox et al.), and Q-learning with log prior augmentation; d) being the first work to explore generalized Ψ-learning and G-learning with deep neural networks, serving as a reference for exploring KLregularized RL objectives with deep Q-learning; e) empirically comparing generalized Ψ-learning, G-learning, and Q-learning with log prior augmentation for the first time; and f) applying this new technique to the problem of music generation, and showing through an empirical study that this method produces melodies which are more melodic, harmonious, interesting, and rated as significantly more subjectively pleasing, than those of the original Note RNN. We suggest that the RL Tuner method could have potential applications in a number of areas as a general way to refine existing recurrent models trained on data by imposing constraints on their behavior.
2 BACKGROUND
2.1 DEEP Q-LEARNING
In RL, an agent interacts with an environment. Given the state of the environment at time t, $s_t$, the agent takes an action $a_t$ according to its policy $\pi(a_t|s_t)$, receives a reward $r(s_t, a_t)$, and the environment transitions to a new state, $s_{t+1}$. The agent’s goal is to maximize reward over a sequence of actions, with a discount factor of γ applied to future rewards. The optimal deterministic policy $\pi^*$ is known to satisfy the following Bellman optimality equation,
$Q(s_t, a_t; \pi^*) = r(s_t, a_t) + \gamma \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[\max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; \pi^*)]$ (1)

where $Q^{\pi}(s_t, a_t) = \mathbb{E}_{\pi}[\sum_{t'=t}^{\infty} \gamma^{t'-t} r(s_{t'}, a_{t'})]$ is the Q function of a policy π. Q-learning techniques (Watkins & Dayan, 1992; Sutton et al., 1999) learn this optimal Q function by iteratively minimizing the Bellman residual. The optimal policy is given by $\pi^*(a|s) = \arg\max_a Q(s, a)$. Deep Q-learning (Mnih et al., 2013) uses a neural network called the deep Q-network (DQN) to approximate the Q function $Q(s, a; \theta)$. The network parameters $\theta$ are learned by applying stochastic gradient descent (SGD) updates with respect to the following loss function,
$L(\theta) = \mathbb{E}_{\beta}[(r(s, a) + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta))^2]$ (2)
where β is the exploration policy, and $\theta^-$ denotes the parameters of the Target Q-network (Mnih et al., 2013) that are held fixed during the gradient computation. The moving average of $\theta$ is used as $\theta^-$ as proposed in (Lillicrap et al., 2016). Exploration can be performed with either the $\epsilon$-greedy method or Boltzmann sampling. Additional standard techniques such as replay memory (Mnih et al., 2013) and Deep Double Q-learning (Hasselt et al., 2015) are used to stabilize and improve learning.
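For reference, a minimal PyTorch sketch of the loss in Eq. 2. It is only a sketch under stated assumptions: `q_net` and `target_net` are assumed `nn.Module` Q-functions mapping a batch of states to per-action values, and sampling from the replay memory is omitted.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, s, a, r, s_next, gamma=0.99):
    """TD loss of Eq. 2: (r + gamma * max_a' Q(s', a'; theta-) - Q(s, a; theta))^2."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():                                  # theta- is held fixed
        target = r + gamma * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```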
2.2 MUSIC GENERATION WITH LSTM
Previous work with music generation using deep learning (e.g. (Eck & Schmidhuber, 2002), (Sturm et al., 2016)) has involved training an RNN to learn to predict the next note in a monophonic melody; we call this type of model a Note RNN. Often, the Note RNN is implemented using a Long Short-Term Memory (LSTM) network (Gers et al., 2000). LSTMs are networks in which each recurrent cell learns to control the storage of information through the use of an input gate, output gate, and forget gate. The first two gates control whether information is able to flow into and out of the cell, and the latter controls whether or not the contents of the cell should be reset. Due to these properties, LSTMs are better at learning long-term dependencies in the data, and can adapt more rapidly to new data (Graves, 2013). A softmax function can be applied to the final outputs of the network to obtain
the probability the network places on each note, and softmax cross-entropy loss can be used to train the model via back propagation through time (BPTT) (Graves & Schmidhuber, 2005). However, as previously described, the melodies generated by this model tend to wander, and lack musical structure; we will show that they are also perceived as less musically pleasing by listeners. In the next section, we will show how to improve this model with RL.
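As a concrete illustration, here is a minimal Note RNN sketch in PyTorch, assuming the one-layer, 100-cell LSTM over length-38 one-hot inputs described in Section 5; the training loop is only indicated in comments.

```python
import torch
import torch.nn as nn

class NoteRNN(nn.Module):
    """One LSTM layer followed by a softmax head over the 38 note events."""
    def __init__(self, num_actions=38, hidden=100):
        super().__init__()
        self.lstm = nn.LSTM(num_actions, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, x, state=None):        # x: (batch, time, 38) one-hot
        out, state = self.lstm(x, state)
        return self.head(out), state         # logits over the next note

# Next-step prediction with softmax cross-entropy (a sketch):
# model = NoteRNN()
# logits, _ = model(one_hot_notes[:, :-1])   # predict note t+1 from notes <= t
# loss = nn.functional.cross_entropy(
#     logits.reshape(-1, 38), note_ids[:, 1:].reshape(-1))
```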
3 RL TUNER DESIGN
Given a trained Note RNN, the goal is to teach it concepts about music theory, while still maintaining the information about typical melodies originally learned from data. To accomplish this task, we propose RL Tuner, a novel sequence training method incorporating RL. We use an LSTM trained on data (the Note RNN) to supply the initial weights for three networks in RL Tuner: the Q-network and Target Q-network in the DQN algorithm as described in Section 2.1, and a Reward RNN. Therefore, the Q-network is a recurrent LSTM model, with architecture identical to that of the original Note RNN. The Reward RNN is used to supply part of the reward value used to train the model, and is held fixed during training.
In order to formulate music generation as an RL problem, we treat placing the next note in the melody as taking an action. The state of the environment s consists of the previous note, and the internal state of the LSTM cells of both the Q-network and the Reward RNN. Thus, Q(a, s) can be calculated by initializing the recurrent Q-network with the appropriate memory cell contents, running it for one time step using the previous note, and evaluating the output value for the action a. The next action can be selected with either a Boltzmann sampling or $\epsilon$-greedy exploration strategy.
Given action a, the reward can be computed by combining probabilities learned from the training data with knowledge of music theory. We define a set of music-theory based rules (described in Section 3.2) to impose constraints on the melody that the model is composing through a reward signal rMT (a, s). For example, if a note is in the wrong key, then the model receives a negative reward. However, it is necessary that the model still be “creative,” rather than learning a simple melody that can easily exploit these rewards. Therefore, we use the Reward RNN — or equivalently the trained Note RNN — to compute log p(a|s), the log probability of a note a given a melody s, and incorporate this into the reward function. Figure 1 illustrates these ideas.
The total reward given at time t is therefore:
$r(s, a) = \log p(a|s) + r_{MT}(a, s)/c$ (3)
where c is a constant controlling the emphasis placed on the music theory reward. Given the DQN loss function in Eq. 2 and modified reward function in Eq. 3, the new loss function and learned policy for RL Tuner are,
$L(\theta) = \mathbb{E}_{\beta}[(\log p(a|s) + r_{MT}(a, s)/c + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta))^2]$ (4)

$\pi_\theta(a|s) = \delta(a = \arg\max_a Q(s, a; \theta)).$ (5)
Thus, the modified loss function forces the model to learn that the most valuable actions are those that conform to the music theory rules, but still have high probability in the original data.
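A sketch of how the combined reward of Eq. 3 could be computed. Here `reward_rnn` and `music_theory_reward` are assumed callables standing in for the fixed Reward RNN (returning log-probabilities over the actions at the current state) and the hand-written $r_{MT}$ of Section 3.2.

```python
def total_reward(reward_rnn, music_theory_reward, state, action, c=0.5):
    """Eq. 3: r(s, a) = log p(a|s) + r_MT(a, s) / c."""
    log_probs = reward_rnn(state)          # log p(. | s) from the fixed Note RNN
    log_p = log_probs[action]              # log p(a | s), the data term
    r_mt = music_theory_reward(state, action)
    return log_p + r_mt / c
```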
3.1 RELATIONSHIP TO KL CONTROL
The technique described in Section 3 has a close connection to stochastic optimal control (SOC) (Stengel, 1986) and in particular, KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012). SOC casts the optimal planning in stochastic environments as inference in graphical models, and enables direct application of probabilistic inference techniques such as Expectation-Maximization (EM) and message passing for solving the control problem (Attias, 2003; Toussaint & Storkey, 2006; Toussaint, 2009). Rawlik et al. (2012); Kappen et al. (2012) then introduced KL control, a generic formulation of SOC as Kullback-Leibler (KL) divergence minimization, and connected it to prior work on RL with additional KL cost (Todorov, 2006). Since our primary focus is to connect with DQNs, we specifically focus on the work by Rawlik et al. (2012) as they derive a temporal-difference-based approach on which we build our methods.
KL control formulation defines a prior dynamics or policy, and derives a variant of the control or RL problem as performing approximate inference in a graphical model. Let τ be a trajectory of state and action sequences, p(τ) be a prior dynamics, and r(τ) be the reward of the trajectory. Then, an additional binary variable b is introduced and a graphical model is defined as $p(\tau, b) = p(\tau)p(b|\tau)$, where $p(b = 1|\tau) = e^{r(\tau)/c}$ and c is the temperature variable. An approximation to $p(\tau|b = 1)$ can be derived using the variational free-energy method, and this leads to a cost with a similar form to the RL problem previously defined, but with an additional penalty based on the KL divergence from the prior trajectory,
$\log p(b = 1) = \log \int p(\tau)p(b = 1|\tau)\, d\tau$ (6)
$\geq \mathbb{E}_{q(\tau)}[\log p(\tau)p(b = 1|\tau) - \log q(\tau)]$ (7)
$= \mathbb{E}_{q(\tau)}[r(\tau)/c - \mathrm{KL}[q(\tau)\|p(\tau)]] = L_v(q)$ (8)
where q(τ) is the variational distribution. Rewriting the variational objective Lv(q) in Eq. 6 in terms of policy πθ, we get the following RL objective with KL-regularization, also known as KL control,
$L_v(\theta) = \mathbb{E}_{\pi}[\sum_t r(s_t, a_t)/c - \mathrm{KL}[\pi_\theta(\cdot|s_t)\|p(\cdot|s_t)]].$ (9)
In contrast, the objective in Section 3 is, $L_v(\theta) = \mathbb{E}_{\pi}[\sum_t r(s_t, a_t)/c + \log p(a_t|s_t)].$ (10)
The difference is that Eq. 9 includes an entropy regularizer, and thus a different off-policy method from Q-learning is required. A generalization of Ψ-learning (Rawlik et al., 2012) and G-learning (Fox et al.)2 are two off-policy methods for solving the KL-regularized RL problem, where additional generalized-Ψ and G functions are defined and learned instead of Q. We implement both of these algorithms as well, treating the prior policy as the conditional distribution p(a|s) defined by the trained Note RNN. To the best of our knowledge, this is the first application of KL-regularized off-policy methods with deep neural networks to sequence modeling tasks. The two methods are given below respectively,
$L(\theta) = \mathbb{E}_{\beta}[(\log p(a|s) + r_{MT}(s, a)/c + \gamma \log \sum_{a'} e^{\Psi(s', a'; \theta^-)} - \Psi(s, a; \theta))^2]$ (11)

$\pi_\theta(a|s) \propto e^{\Psi(s, a; \theta)}$ (12)

$L(\theta) = \mathbb{E}_{\beta}[(r_{MT}(s, a)/c + \gamma \log \sum_{a'} e^{\log p(a'|s') + G(s', a'; \theta^-)} - G(s, a; \theta))^2]$ (13)

$\pi_\theta(a|s) \propto p(a|s)e^{G(s, a; \theta)}.$ (14)

2The methods in the original papers are derived for different motivations and presented in different forms as described in Section 4, but we refer to them using their names as the derivations follow closely from the papers.
Both methods can be seen as instances of KL-regularized deep Q-learning, and they also subsume entropy-regularized deep Q-learning by removing the $\log p(a|s)$ term. The main difference between the two methods is the definition of the action-value functions generalized-Ψ and G. In fact, G-learning can be directly derived from generalized Ψ-learning by reparametrizing $\Psi(s, a) = \log p(a|s) + G(s, a)$. The G-function does not give the policy directly but instead needs to be dynamically mixed with the prior policy probabilities. While this computation is straight-forward for discrete action domains as here, extensions to continuous action domains require additional considerations such as normalizability of advantage function parametrizations (Gu et al., 2016). The KL control-based derivation also has another benefit in that the stochastic policies can be directly used as an exploration strategy, instead of heuristics such as $\epsilon$-greedy or additive noise (Mnih et al., 2013; Lillicrap et al., 2016). The derivations for both methods are included in the appendix for completeness.
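For concreteness, a sketch of the soft TD targets that distinguish Eqs. 11 and 13 from the hard max of Q-learning, assuming PyTorch tensors: `psi_next` and `g_next` stand for the target-network outputs over actions at the next state, and `log_prior_next` for $\log p(\cdot|s')$ from the Note RNN.

```python
import torch

def psi_target(r_mt, log_p_a, psi_next, gamma=0.5, c=0.5):
    """Eq. 11 target: log p(a|s) + r_MT/c + gamma * log sum_a' exp(Psi(s', a'))."""
    return log_p_a + r_mt / c + gamma * torch.logsumexp(psi_next, dim=-1)

def g_target(r_mt, log_prior_next, g_next, gamma=0.5, c=0.5):
    """Eq. 13 target: r_MT/c + gamma * log sum_a' exp(log p(a'|s') + G(s', a'))."""
    return r_mt / c + gamma * torch.logsumexp(log_prior_next + g_next, dim=-1)
```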
3.2 MUSIC-THEORY BASED REWARD
A central question of this paper is whether RL can be used to constrain a sequence learner such that the sequences it generates adhere to a desired structure. To test this hypothesis, we developed several rules that we believe describe more pleasant-sounding melodies, taking inspiration from a text on melodic composition (Gauldin, 1995). We do not claim these characteristics are exhaustive, strictly necessary for good composition, or even particularly interesting. They simply serve the purpose of guiding the model towards traditional composition structure. It is therefore crucial to apply the RL Tuner framework to retain the knowledge learned from real songs in the training data.
Following the principles set out on page 42 of Gauldin’s book (Gauldin, 1995), we define the reward function $r_{MT}(a, s)$ to encourage melodies to have the following characteristics. All notes should belong to the same key, and the melody should begin and end with the tonic note of the key; e.g. if the key is C-major, this note would be middle C. This note should occur in the first beat and last 4 beats of the melody. Unless a rest is introduced or a note is held, a single tone should not be repeated more than four3 times in a row. To encourage variety, we penalize the model if the melody is highly correlated with itself at a lag of 1, 2, or 3 beats. The penalty is applied when the auto-correlation coefficient is greater than .15. The melody should avoid awkward intervals like augmented 7ths, or large jumps of more than an octave. Gauldin also indicates good compositions should move by a mixture of small steps and larger harmonic intervals, with emphasis on the former; the reward values for intervals reflect these requirements. When the melody moves with a large interval (a 5th or more) in one direction, it should eventually be resolved by a leap back or gradual movement in the opposite direction. Leaping twice in the same direction is negatively rewarded. The highest note of the melody should be unique, as should the lowest note. Finally, the model is rewarded for playing motifs, which are defined as a succession of notes representing a short musical “idea”; in our implementation, a bar of music with three or more unique notes. Since repetition has been shown to be key to emotional engagement with music (Livingstone et al., 2012), we also sought to train the model to repeat the same motif within a melody.
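As a toy illustration, a sketch of a few of these rules as a reward function. The reward magnitudes and the restriction to C-major are illustrative assumptions (Section 6 mentions, for instance, a penalty of -100 for excessive repetition), and the full rule set above is not reproduced.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of the key, as an example

def music_theory_reward(melody, note):
    """Toy r_MT(a, s): melody is the list of events so far, note the new action.
    Events: 0 = note off, 1 = no event, 2..37 = pitches starting at C3."""
    r = 0.0
    if note >= 2 and (note - 2) % 12 not in C_MAJOR:
        r -= 1.0                   # note outside the key
    if len(melody) >= 4 and all(n == note for n in melody[-4:]):
        r -= 100.0                 # same tone repeated more than four times
    if melody and melody[-1] >= 2 and note >= 2 \
            and abs(note - melody[-1]) > 12:
        r -= 1.0                   # leap larger than an octave
    return r
```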
4 RELATED WORK
Generative modeling of music with RNNs has been explored in a variety of contexts, including generating Celtic folk music (Sturm et al., 2016), or performing Blues improvisation (Eck & Schmidhuber, 2002). Other approaches have examined RNNs with richer expressivity, latent-variables for notes, or raw audio synthesis (Boulanger-Lewandowski et al., 2012; Gu et al., 2015; Chung et al., 2015). Recently, impressive performance in generating music from raw audio has been attained with convolutional neural networks with receptive fields at various time scales (Dieleman et al., 2016).
Although the application of RL to RNNs is a relatively new area, recent work has attempted to combine the two approaches. MIXER (Mixed Incremental Cross-Entropy Reinforce) (Ranzato et al., 2015) uses BLEU score as a reward signal to gradually introduce an RL loss to a text translation model. After initially training the model using cross-entropy, the training process is repeated using cross-entropy loss for the $T - \Delta$ tokens in a sequence (where $T$ is the length of the sequence), and
3While the number four can be considered a rough heuristic, avoiding excessively repeated notes and static melodic contours is Gauldin’s first rule of melodic composition (Gauldin, 1995).
using RL for the remainder of the sequence. Another approach (Bahdanau et al., 2016) applies an actor-critic method and uses BLEU score directly to train a critic network to output the value of each word, where the actor is again initialized with the policy of an RNN trained with next-step prediction. Reward-augmented maximum likelihood (Norouzi et al., 2016) augments the standard ML with a sequence-level reward function and connects it with the above RL training methods. These approaches assume that the complete task reward specification is available. They pre-train a good policy with supervised learning so that RL can be used to learn with the true task objective, since training with RL from scratch is difficult. RL Tuner instead only uses rewards to correct certain properties of the generated data, while learning most information from data. This is important since in many sequence modeling applications such as music or language generation, the true reward function is not available or imperfect and ultimately the model should rely on learning from data. The RL Tuner method provides an elegant and flexible framework for correcting undesirable behaviors of RNNs that can arise from limited training data or imperfect training algorithms.
SeqGAN (Yu et al., 2016) applies RL to an RNN by using a discriminator network — similar to those used in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) — to classify the realism of a complete sequence, and this classifier-based reward is used as a reward signal to the RNN. The approach is applied to a number of generation problems, including music generation. Although the model obtained improved MSE and BLEU scores on the Nottingham music dataset, it is not clear how these scores map to the subjective quality of the samples (Huszár, 2015), and no samples are provided with the paper. In contrast, we provide both samples and quantitative results demonstrating that our approach improves the metrics defined by the reward function. Further, we show that RL Tuner can be used to explicitly correct undesirable behaviors of an RNN, which could be useful in a broad range of applications.
Also related to our work is that of Li and colleagues (Li et al., 2016), in which the authors pre-train a model with MLE and then use RL to impose heuristic rules designed to improve the dialog generated by the model. However, after pre-training, only the heuristic rewards are used for further training, which alters the model to optimize only for the heuristic rewards, whereas our approach allows the model to retain information learned from data, while explicitly controlling the trade-off between the influence of data and heuristic reward with the c parameter. While Li and colleagues do use the outputs of the pre-trained model as part of one of the heuristic reward functions, it is only to teach the model to choose dialog turns that minimize the probability that the pre-trained model places on “dull” responses, such as “I don’t know”. However, our approach directly penalizes divergence from the probability distribution learned by the MLE model for every response, allowing the model to retain information about the full space of sequences originally learned from data.
Finally, as discussed in Section 3.1, our approach is related to stochastic optimal control (SOC) (Stengel, 1986) and KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012), in particular the two off-policy, model-free methods, Ψ-learning (Rawlik et al., 2012) and G-learning (Fox et al.). Both approaches solve a KL-regularized RL problem, in which a term is introduced to the reward objective to penalize KL divergence from some prior policy. While our methods rely on similar derivations presented in these papers, there are some key differences. First, these techniques have not been applied to DQNs or RNNs, or as a way to fine-tune a pre-trained RNN with additional desired characteristics. Secondly, our methods have different motivations and forms from the original papers: original Ψ-learning (Rawlik et al., 2012) restricts the prior policy to be the policy at the previous iteration and solves the original RL objective with conservative, KL-regularized policy updates, similar to conservative policy gradient methods (Kakade, 2001; Peters et al., 2010; Schulman et al., 2015). The original G-learning (Fox et al.) penalizes divergence from a simple uniform prior policy in order to cope with over-estimation of target Q values, and includes scheduling for the temperature parameter c. Lastly, our work includes the Q-learning objective with additional cross-entropy reward as a comparable alternative, and provides for the first time comparisons among the three methods for incorporating prior knowledge in RL.
5 EXPERIMENTS
To train the Note RNN, we extract monophonic melodies from a corpus of 30,000 MIDI songs. Melodies are quantized at the granularity of a sixteenth note, so each time step corresponds to one sixteenth of a bar of music. We encode a melody using two special events plus three octaves of notes.
The special events are used to introduce rests and notes with longer durations, and are encoded as 0 = note off, 1 = no event. Three octaves of pitches, starting from MIDI pitch 48, are then encoded as 2 = C3, 3 = C#3, 4 = D3, ..., 37 = B5. For example, the sequence {4, 1, 0, 1} encodes an eighth note with pitch D3, followed by an eighth note rest. As the melodies are monophonic, playing another note implicitly ends the last note that was played without requiring an explicit note off event. Thus the sequence {2, 4, 6, 7} encodes a melody of four sixteenth notes: C3, D3, E3, F3. A length-38 one-hot encoding of these values is used for both network input and network output.
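The encoding above is easy to state in code; a small sketch with the special events and pitch range exactly as described:

```python
NOTE_OFF, NO_EVENT = 0, 1
MIN_MIDI_PITCH, NUM_ACTIONS = 48, 38      # C3 and the one-hot length

def encode_pitch(midi_pitch):
    """Map MIDI pitches 48..83 (C3..B5) to events 2..37."""
    assert MIN_MIDI_PITCH <= midi_pitch < MIN_MIDI_PITCH + 36
    return midi_pitch - MIN_MIDI_PITCH + 2

def one_hot(event):
    vec = [0.0] * NUM_ACTIONS             # length-38 one-hot encoding
    vec[event] = 1.0
    return vec

# {2, 4, 6, 7} encodes four sixteenth notes C3, D3, E3, F3:
assert [encode_pitch(p) for p in (48, 50, 52, 53)] == [2, 4, 6, 7]
```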
The Note RNN consists of one LSTM layer of 100 cells, and was trained for 30,000 iterations with a batch size of 128. Optimization was performed with Adam (Kingma & Ba, 2014), and gradients were clipped to ensure the L2 norm was less than 5. The learning rate was initially set to .5, and a momentum of 0.85 was used to exponentially decay the learning rate every 1000 steps. To regularize the network, a penalty of $\beta = 2.5 \times 10^{-5}$ was applied to the L2 norm of the network weights. Finally, the losses for the first 8 notes of each sequence were not used to train the model, since it cannot reasonably be expected to accurately predict them with no context. The trained Note RNN eventually obtained a validation accuracy of 92% and a log perplexity score of .2536.
The learned weights of the Note RNN were used to initialize the three sub-networks in the RL Tuner model. Each RL Tuner model was trained for 1,000,000 iterations, using the Adam optimizer, a batch size of 32, and clipping gradients in the same way. The reward discount factor was γ=.5. The Target-Q-network’s weights $\theta^-$ were gradually updated to be similar to those of the Q-network ($\theta$) according to the formula $(1 - \eta)\theta^- + \eta\theta$, where η = .01 is the Target-Q-network update rate. We replicated our results for a number of settings for the weight placed on the music-theory rewards, c; we present results for c=.5 below because we believe them to be most musically pleasing. Similarly, we replicated the results using both $\epsilon$-greedy and Boltzmann exploration, and present the results using $\epsilon$-greedy exploration below.
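The soft target-network update described above is one line per parameter; a sketch assuming PyTorch modules:

```python
import torch

def update_target(q_net, target_net, eta=0.01):
    """theta- <- (1 - eta) * theta- + eta * theta, applied parameter-wise."""
    with torch.no_grad():
        for p, p_targ in zip(q_net.parameters(), target_net.parameters()):
            p_targ.mul_(1.0 - eta).add_(eta * p)
```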
We compare three methods for implementing RL Tuner: Q-learning, generalized Ψ-learning, and G-learning, where the policy defined by the trained Note RNN is used as the cross entropy reward in Q-learning and the prior policy in G- and generalized Ψ-learning. These approaches are compared to both the original performance of the Note RNN, and a model trained using only RL and no prior policy. Model evaluation is performed every 100,000 training epochs, by generating 100 melodies and assessing the average rMT and log p(a|s). All of the code for RL Tuner, including a checkpointed version of the trained Note RNN is available at https://github.com/natashamjaques/magenta/tree/rl-tuner.
6 RESULTS
Table 1 provides quantitative results in the form of performance on the music theory rules to which we trained the model to adhere; for example, we can assess the fraction of notes played by the model which belonged to the correct key, or the fraction of melodic leaps that were resolved. The statistics were computed by randomly generating 100,000 melodies from each model.
The results above demonstrate that the application of RL is able to correct almost all of the targeted “bad behaviors” of the Note RNN, while improving performance on the desired metrics. For example, the original LSTM model was extremely prone to repeating the same note; after applying RL, we see that the number of notes belonging to some excessively repeated segment has dropped from 63% to nearly 0% in all of the RL Tuner models. While the metrics for the G model did not improve as consistently, the Q and Ψ models successfully learned to play in key, resolve melodic leaps, and play motifs. The number of melodies that start with the tonic note has also increased, melody auto-correlation has decreased, and repeated motifs have increased slightly. The degree of improvement on these metrics is related to the magnitude of the reward given for the behavior. For example, a strong penalty of -100 was applied each time a note was excessively repeated, while a reward of only 3 was applied at the end of a melody for unique extrema notes (which most likely explains the lack of improvement on this metric). The reward values could be adjusted to improve the metrics further, however we found that these values produced the most pleasant melodies.
While the metrics indicate that the targeted behaviors of the RNN have improved, it is not clear whether the models have retained information about the training data. Figure 2a plots the average log p(a|s) as produced by the Reward RNN for melodies generated by the models every 100,000 training epochs; Figure 2b plots the average rMT . Included in the plots is an RL only model trained using only the music theory rewards, with no information about log p(a|s). Since each model is initialized with the weights of the trained Note RNN, we see that as the models quickly learn to adhere to the music theory constraints, log p(a|s) falls from its initial point. For the RL only model, log p(a|s) reaches an average of -3.65, which is equivalent to an average p(a|s) of approximately 0.026. Since there are 38 actions, this represents essentially a random policy with respect to the distribution defined by the Note RNN. Figure 2a shows that each of our models (Q, Ψ, and G) attain higher log p(a|s) values than this baseline, indicating they have maintained information about the data probabilities. The G-learning implementation scores highest on this metric, at the cost of slightly lower average rMT . This compromise between data probability and adherence to music theory could explain the difference in G model’s performance on the music theory metrics in Table 1. Finally, while c = 0.5 produced melodies that sounded better subjectively, we found that by increasing the c parameter it is possible to train all the models to have even higher average log p(a|s).
The question remains whether the RL-tuned models actually produce more pleasing melodies. To answer it, we conducted a user study via Amazon Mechanical Turk in which participants were asked to rate which of two randomly selected melodies they preferred on a Likert scale. A total of 192 ratings were collected; each model was involved in 92 of these comparisons. Figure 3 plots the number of comparisons in which a melody from each model was selected as the most musically pleasing. A Kruskal-Wallis H test of the ratings showed that there was a statistically significant difference between the models, χ2(3) = 109.480, p < 0.001. Mann-Whitney U post-hoc tests revealed that the melodies from all three RL Tuner models (Q, Ψ, and G) had significantly higher ratings than the melodies of the Note RNN, p < .001. The Q and Ψ melodies were also rated as significantly more pleasing than those of the G model, but did not differ significantly from each other. The sample melodies used for the study are available here: goo.gl/XIYt9m; we encourage readers to judge their quality for themselves.
Listening to the samples produced by the Note RNN reveals that they are sometimes discordant and usually dull; the model tends to place rests frequently, repeat the same note, and produce melodies with little variation. In contrast, the melodies produced by the RL Tuner models are more varied and interesting. The G model tends to produce energetic and chaotic melodies, which include sequences of repeated notes. This repetition is likely because the G policy as defined in Eq. 14 directly mixes p(a|s) with the output of the G network, and the Note RNN strongly favours repeating notes. The most pleasant-sounding melodies are generated by the Q and Ψ models. These melodies stay firmly in key and frequently choose more harmonious interval steps, leading to melodic and pleasant results. However, it is clear they have retained information about the training data; for example, the sample q2.wav in the sample directory ends with a seemingly familiar riff.
7 DISCUSSION AND FUTURE WORK
We have derived a novel sequence learning framework which uses RL rewards to correct properties of sequences generated by an RNN, while keeping much of the information learned from supervised training on data. We proposed and evaluated three alternative techniques for achieving this, and showed promising results on music generation tasks.
While we acknowledge that the simple monophonic melodies generated by these models — which are based on overly simplistic rules of melodic composition — do not approach the level of artistic merit of human composers, we believe this study provides a proof-of-concept that encoding domain knowledge using our method can help the outputs of an LSTM adhere to a more consistent structure. The musical complexity of the songs is limited not just by the heuristic rules, but also by the numerical encoding, which cannot represent the dynamics and expressivity of a musical performance. However, although these simple melodies cannot surpass those of human musicians, attempting to train a model to generate aesthetically pleasing outputs in the absence of a better metric of human taste than log-likelihood is a problem of broader interest to the artificial intelligence community.
In addition to the ability to train models to generate pleasant-sounding melodies, we believe our approach of using RL to refine RNN models could be promising for a number of applications. For example, it is well known that a common failure mode of RNNs is to repeatedly generate the same token. In text generation and automatic question answering, this can take the form of repeatedly generating the same response (e.g. “How are you?” → “How are you?” → “How are you?” ...). We have demonstrated that with our approach we can correct for this unwanted behavior, while still maintaining information that the model learned from data. Although manually writing a reward function may seem unappealing to those who believe in training models end-to-end based only on data, that approach is limited by the quality of the data that can be collected. If the data contains hidden biases, this can lead to highly undesirable consequences. Recent research has shown that the word2vec embeddings in popular language models trained on standard corpora consistently contain the same harmful biases with respect to race and gender that are revealed by implicit association tests on humans (Caliskan-Islam et al., 2016). In contrast to relying solely on possibly biased data, our approach allows for encoding high-level domain knowledge into the RNN, providing a general, alternative tool for training sequence models.
ACKNOWLEDGMENTS
This work was supported by Google Brain, the MIT Media Lab Consortium, and Canada’s Natural Sciences and Engineering Research Council (NSERC). We thank Dzmitry Bahdanau, Greg Wayne, Sergey Levine, and Timothy Lillicrap for helpful discussions on RL and stochastic optimal control.
8 APPENDIX
8.1 OFF-POLICY METHODS DERIVATIONS FOR KL-REGULARIZED REINFORCEMENT LEARNING
Given the KL-regularized RL objective defined in Eq. 9, the value function is given by,

$V(s_t; \pi) = \mathbb{E}_{\pi}[\sum_{t' \geq t} r(s_{t'}, a_{t'})/c - \mathrm{KL}[\pi(\cdot|s_{t'})\|p(\cdot|s_{t'})]]$ (15)
8.1.1 GENERALIZED Ψ-LEARNING
The following derivation is based on modifications to (Rawlik et al., 2012) and resembles the derivation in Fox et al. We define the generalized Ψ function as,
$\Psi(s_t, a_t; \pi) = r(s_t, a_t)/c + \log p(a_t|s_t)$ (16)
$\quad + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\mathbb{E}_{\pi}[\sum_{t' \geq t+1} r(s_{t'}, a_{t'})/c - \mathrm{KL}[\pi(\cdot|s_{t'})\|p(\cdot|s_{t'})]]$ (17)
$= r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1}; \pi)]$ (18)

The value function can be expressed as,

$V(s_t; \pi) = \mathbb{E}_{\pi}[\Psi(s_t, a_t; \pi)] + H[\pi]$ (19)
$= \mathbb{E}_{\pi}[\Psi(s_t, a_t; \pi) - \log \pi(a_t|s_t)]$ (20)
Fixing Ψ(st, at) = Ψ(st, at;π) and constraining π to be a probability distribution, the optimal greedy policy update π∗ can be derived by functional calculus, along with the corresponding optimal value function,
$\pi^*(a_t|s_t) \propto e^{\Psi(s_t, a_t)}$ (21)

$V(s_t; \pi^*) = \log \sum_{a_t} e^{\Psi(s_t, a_t)}$ (22)
Given Eq. 18 and 22, the following Bellman optimality equation for generalized Ψ function is derived, and the Ψ-learning loss in Eq. 11 directly follows.
$\Psi(s_t, a_t; \pi^*) = r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[\log \sum_{a_{t+1}} e^{\Psi(s_{t+1}, a_{t+1}; \pi^*)}]$ (23)
8.1.2 G-LEARNING
The following derivation is based on (Fox et al.) with small modifications. We define the G function as,

$G(s_t, a_t; \pi) = r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\mathbb{E}_{\pi}[\sum_{t' \geq t+1} r(s_{t'}, a_{t'})/c - \mathrm{KL}[\pi(\cdot|s_{t'})\|p(\cdot|s_{t'})]]$ (24)
$= r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1}; \pi)] = \Psi(s_t, a_t; \pi) - \log p(a_t|s_t)$ (25)
A similar derivation to the above can be applied.

$V(s_t; \pi) = \mathbb{E}_{\pi}[G(s_t, a_t; \pi)] - \mathrm{KL}[\pi(\cdot|s_t)\|p(\cdot|s_t)]$ (26)
$= \mathbb{E}_{\pi}\left[G(s_t, a_t; \pi) - \log \frac{\pi(a_t|s_t)}{p(a_t|s_t)}\right]$ (27)

$\pi^*(a_t|s_t) \propto p(a_t|s_t)e^{G(s_t, a_t)}$ (28)

$V(s_t; \pi^*) = \log \sum_{a_t} p(a_t|s_t)e^{G(s_t, a_t)}$ (29)

$G(s_t, a_t; \pi^*) = r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[\log \sum_{a_{t+1}} p(a_{t+1}|s_{t+1})e^{G(s_{t+1}, a_{t+1}; \pi^*)}]$ (30)
Alternatively, the above expression for G-learning can be derived from Ψ-learning by simple reparametrization with $\Psi(s, a) = G(s, a) + \log p(a|s)$ in Eq. 23. | 1. What is the focus of the paper regarding melody synthesis?
2. What are the strengths and weaknesses of the proposed approach compared to previous works?
3. How does the reviewer assess the novelty and utility of the application?
4. Are there any concerns regarding the way the authors describe the model's ability to compose melodies?
5. What are the reviewer's suggestions for improving the paper's motivation and impact? | Review | Review
The authors propose a solution for the task of synthesizing melodies. The authors claim that the "language-model"-type approaches with LSTMs generate melodies with certain shortcomings. They tend to lack long-range structure, to repeat notes etc. To solve this problem the authors suggest that the model could be first trained as a pure LM-style LSTM and then trained with reinforcement learning to optimize an objective which includes some non-differentiable music-theory related constraints.
The reinforcement learning methodology is appropriate but straightforward and closely resembles previous work for text modeling and dialogue generation. By itself the methodology doesn't offer a new technique.
To me, the paper's contribution then comes down to the novelty / utility / impact of the application. The authors clearly put substantial effort into crafting the rules and user study and that is commendable. On the other hand, music itself is dealt with somewhat naively. While the user study reflects hard work, it seems premature. The semi-plausible piano melodies here are only music in the way that LSTM Shakespeare passes as poetry. So it's analogous to conducting a user study comparing LSTM Shakespeare to n-gram Shakespeare.
I'd caution the authors against the uncritical motivation that a problem has previously been studied. Research contains abundant dead ends (not to say this is necessarily one) and the burden to motivate research shouldn't be forgotten. This is especially true when the application is the primary thrust of a paper.
Generally the authors should be careful about describing this model as "composing". By analogy to a Shakespeare-LSTM, the language model is not really composing English prose. The relationship between constructing a statistical sequence model and creating art - an activity that involves communication grounded in real-world semantics should not be overstated.
I appreciate the authors' efforts to respond to some criticisms of the problem setup and encourage them to anticipate these arguments in the paper and to better motivate the work in the future. If the main contribution is the application (the methods have been used elsewhere), then the motivation is of central importance. I also appreciate their contention that the field benefits from multiple datasets and not simply relying on language modeling. Further, they are correct in asserting that MIDI can capture all the information in a score (not merely "Gameboy music"), and that for some musics (e.g. European classical) the score is of central importance. However, the authors may overstate the role of a score in jazz music.
Overall, for me, the application, while fun, doesn't add enough to the impact of the paper. And the methodology, while appropriate, doesn't stand on its own.
--Update-- Thanks for your modifications and arguments. I've revised my scores to add a point. |
ICLR | Title
Tuning Recurrent Neural Networks with Reinforcement Learning
Abstract
The approach of training sequence models using supervised learning and next-step prediction suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. We propose a novel sequence-learning approach in which we use a pre-trained Recurrent Neural Network (RNN) to supply part of the reward value in a Reinforcement Learning (RL) model. Thus, we can refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from KL control. We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note-RNN is then refined using our method and rules of music theory. We show that by combining maximum likelihood (ML) and RL in this way, we can not only produce more pleasing melodies, but significantly reduce unwanted behaviors and failure modes of the RNN, while maintaining information learned from data.
1 INTRODUCTION
Generative modeling of music with deep neural networks is typically accomplished by training an RNN such as a Long Short-Term Memory (LSTM) network to predict the next note in a musical sequence (e.g. Eck & Schmidhuber (2002)). Similar to a Character RNN (Mikolov et al., 2010), these Note RNNs can be used to generate novel melodies by initializing them with a short sequence of notes, then repeatedly sampling from the model’s output distribution to obtain the next note. While melodies and text generated in this way have recently garnered attention1, this type of model tends to suffer from common failure modes, such as excessively repeating tokens, or producing sequences that lack a consistent theme or structure. Such sequences can appear wandering and random (see Graves (2013) for a text example).
Music compositions adhere to relatively well-defined structural rules, making music an interesting sequence generation challenge. For example, music theory tells us that groups of notes belong to keys, chords follow progressions, and songs have consistent structures made up of musical phrases. Our research question is therefore whether such music-theory-based constraints can be learned by an RNN, while still allowing it to maintain note probabilities learned from data.
To approach this problem we propose RL Tuner, a novel sequence learning approach in which RL is used to impose structure on an RNN trained on data. The reward function in our framework combines task-related rewards with the probability of a given action originally learned by the pre-trained RNN. Thus, our model directly preserves information about the original probability distributions learned from data, while allowing us to explicitly control the trade-off between the influence of data
1http://www.theverge.com/2016/6/1/11829678/google-magenta-melody-art-generative-artificialintelligence
and heuristic rewards. This is an important novel direction of research, because in many tasks the available reward functions are not a perfect metric that alone will lead to the best task performance in the real world (e.g. BLEU score). Unlike previous work (e.g. (Ranzato et al., 2015), (Bahdanau et al., 2016), (Norouzi et al., 2016), (Li et al., 2016)) we do not use ML training as a way to simply bootstrap the training of an RL model, but rather we rely mainly on information learned from data, and use RL only as a way to refine characteristics of the output by imposing structural rules.
This paper contributes to the sequence training and RL literature by a) proposing a novel method for combining ML and RL training; b) showing the connection between this approach and Stochastic Optimal Control (SOC)/KL-control with a pre-trained RNN as a prior policy; c) showing the explicit relationships among a generalized variant of Ψ-learning (Rawlik et al., 2012), G-learning (Fox et al.), and Q-learning with log prior augmentation; d) being the first work to explore generalized Ψ-learning and G-learning with deep neural networks, serving as a reference for exploring KLregularized RL objectives with deep Q-learning; e) empirically comparing generalized Ψ-learning, G-learning, and Q-learning with log prior augmentation for the first time; and f) applying this new technique to the problem of music generation, and showing through an empirical study that this method produces melodies which are more melodic, harmonious, interesting, and rated as significantly more subjectively pleasing, than those of the original Note RNN. We suggest that the RL Tuner method could have potential applications in a number of areas as a general way to refine existing recurrent models trained on data by imposing constraints on their behavior.
2 BACKGROUND
2.1 DEEP Q-LEARNING
In RL, an agent interacts with an environment. Given the state of the environment at time t, $s_t$, the agent takes an action $a_t$ according to its policy $\pi(a_t|s_t)$, receives a reward $r(s_t, a_t)$, and the environment transitions to a new state, $s_{t+1}$. The agent’s goal is to maximize reward over a sequence of actions, with a discount factor of γ applied to future rewards. The optimal deterministic policy $\pi^*$ is known to satisfy the following Bellman optimality equation,
$Q(s_t, a_t; \pi^*) = r(s_t, a_t) + \gamma \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[\max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; \pi^*)]$ (1)

where $Q^{\pi}(s_t, a_t) = \mathbb{E}_{\pi}[\sum_{t'=t}^{\infty} \gamma^{t'-t} r(s_{t'}, a_{t'})]$ is the Q function of a policy π. Q-learning techniques (Watkins & Dayan, 1992; Sutton et al., 1999) learn this optimal Q function by iteratively minimizing the Bellman residual. The optimal policy is given by $\pi^*(a|s) = \arg\max_a Q(s, a)$. Deep Q-learning (Mnih et al., 2013) uses a neural network called the deep Q-network (DQN) to approximate the Q function $Q(s, a; \theta)$. The network parameters $\theta$ are learned by applying stochastic gradient descent (SGD) updates with respect to the following loss function,
$$L(\theta) = \mathbb{E}_\beta\big[(r(s, a) + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta))^2\big] \quad (2)$$
where $\beta$ is the exploration policy, and $\theta^-$ are the parameters of the Target Q-network (Mnih et al., 2013) that are held fixed during the gradient computation. The moving average of $\theta$ is used as $\theta^-$, as proposed in (Lillicrap et al., 2016). Exploration can be performed with either the $\epsilon$-greedy method or Boltzmann sampling. Additional standard techniques such as replay memory (Mnih et al., 2013) and Deep Double Q-learning (Hasselt et al., 2015) are used to stabilize and improve learning.
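As an illustration of Eq. 2 and the surrounding machinery, here is a minimal sketch in plain Python/NumPy of the squared Bellman residual with a frozen target network, together with Boltzmann sampling for exploration. The function and variable names are ours, not from the paper's released codebase.

```python
import numpy as np

def dqn_loss(q_sa, q_next_target, reward, gamma=0.5):
    """Squared Bellman residual from Eq. 2 for one transition.

    q_sa:          scalar estimate Q(s, a; theta) for the action taken.
    q_next_target: vector Q(s', .; theta^-) from the frozen Target Q-network.
    """
    target = reward + gamma * np.max(q_next_target)  # bootstrap with theta^-
    return (target - q_sa) ** 2                      # no gradient flows to theta^-

def boltzmann_sample(q_values, temperature=1.0, rng=None):
    """Boltzmann exploration: sample an action with probability exp(Q/T)/Z."""
    rng = rng or np.random.default_rng()
    logits = q_values / temperature
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(q_values), p=probs)
```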
2.2 MUSIC GENERATION WITH LSTM
Previous work with music generation using deep learning (e.g. (Eck & Schmidhuber, 2002), (Sturm et al., 2016)) has involved training an RNN to learn to predict the next note in a monophonic melody; we call this type of model a Note RNN. Often, the Note RNN is implemented using a Long Short-Term Memory (LSTM) network (Gers et al., 2000). LSTMs are networks in which each recurrent cell learns to control the storage of information through the use of an input gate, output gate, and forget gate. The first two gates control whether information is able to flow into and out of the cell, and the latter controls whether or not the contents of the cell should be reset. Due to these properties, LSTMs are better at learning long-term dependencies in the data, and can adapt more rapidly to new data (Graves, 2013). A softmax function can be applied to the final outputs of the network to obtain
the probability the network places on each note, and softmax cross-entropy loss can be used to train the model via back propagation through time (BPTT) (Graves & Schmidhuber, 2005). However, as previously described, the melodies generated by this model tend to wander, and lack musical structure; we will show that they are also perceived as less musically pleasing by listeners. In the next section, we will show how to improve this model with RL.
3 RL TUNER DESIGN
Given a trained Note RNN, the goal is to teach it concepts about music theory, while still maintaining the information about typical melodies originally learned from data. To accomplish this task, we propose RL Tuner, a novel sequence training method incorporating RL. We use an LSTM trained on data (the Note RNN) to supply the initial weights for three networks in RL Tuner: the Q-network and Target Q-network in the DQN algorithm as described in Section 2.1, and a Reward RNN. Therefore, the Q-network is a recurrent LSTM model, with architecture identical to that of the original Note RNN. The Reward RNN is used to supply part of the reward value used to train the model, and is held fixed during training.
In order to formulate music generation as an RL problem, we treat placing the next note in the melody as taking an action. The state of the environment $s$ consists of the previous note, and the internal state of the LSTM cells of both the Q-network and the Reward RNN. Thus, $Q(a, s)$ can be calculated by initializing the recurrent Q-network with the appropriate memory cell contents, running it for one time step using the previous note, and evaluating the output value for the action $a$. The next action can be selected with either a Boltzmann sampling or $\epsilon$-greedy exploration strategy.
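To make the state and action bookkeeping concrete, the sketch below shows one recurrent Q-value step; `lstm_step` is a hypothetical helper standing in for the LSTM forward pass, since the paper's actual network code is not reproduced here.

```python
import numpy as np

N_ACTIONS = 38  # the melody vocabulary described in Section 5

def one_hot(note, n=N_ACTIONS):
    v = np.zeros(n)
    v[note] = 1.0
    return v

def q_step(lstm_step, q_net_state, prev_note):
    """Run the recurrent Q-network for one time step.

    The environment state is (prev_note, LSTM cell contents); a single
    recurrent step yields Q(s, a) for all candidate next notes at once.
    lstm_step is a hypothetical callable: (state, input) -> (new_state, outputs).
    """
    new_state, q_all = lstm_step(q_net_state, one_hot(prev_note))
    return new_state, q_all  # q_all[a] = Q(s, a; theta)
```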
Given action $a$, the reward can be computed by combining probabilities learned from the training data with knowledge of music theory. We define a set of music-theory based rules (described in Section 3.2) to impose constraints on the melody that the model is composing through a reward signal $r_{MT}(a, s)$. For example, if a note is in the wrong key, then the model receives a negative reward. However, it is necessary that the model still be “creative,” rather than learning a simple melody that can easily exploit these rewards. Therefore, we use the Reward RNN — or equivalently the trained Note RNN — to compute $\log p(a|s)$, the log probability of a note $a$ given a melody $s$, and incorporate this into the reward function. Figure 1 illustrates these ideas.
The total reward given at time t is therefore:
$$r(s, a) = \log p(a|s) + r_{MT}(a, s)/c \quad (3)$$
where $c$ is a constant controlling the emphasis placed on the music theory reward. Given the DQN loss function in Eq. 2 and the modified reward function in Eq. 3, the new loss function and learned policy for RL Tuner are,

$$L(\theta) = \mathbb{E}_\beta\big[(\log p(a|s) + r_{MT}(a, s)/c + \gamma \max_{a'} Q(s', a'; \theta^-) - Q(s, a; \theta))^2\big] \quad (4)$$

$$\pi_\theta(a|s) = \delta(a = \arg\max_a Q(s, a; \theta)). \quad (5)$$
Thus, the modified loss function forces the model to learn that the most valuable actions are those that conform to the music theory rules, but still have high probability in the original data.
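As a minimal sketch of Eq. 3 and of the bootstrapped target inside the loss of Eq. 4, assuming NumPy and our own helper names:

```python
import numpy as np

def rl_tuner_reward(log_p_a_given_s, r_mt, c=0.5):
    """Eq. 3: data log-likelihood plus the scaled music-theory reward."""
    return log_p_a_given_s + r_mt / c

def rl_tuner_target(log_p_a_given_s, r_mt, q_next_target, gamma=0.5, c=0.5):
    """Bootstrapped target inside the squared loss of Eq. 4."""
    return rl_tuner_reward(log_p_a_given_s, r_mt, c) + gamma * np.max(q_next_target)
```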
3.1 RELATIONSHIP TO KL CONTROL
The technique described in Section 3 has a close connection to stochastic optimal control (SOC) (Stengel, 1986) and in particular, KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012). SOC casts optimal planning in stochastic environments as inference in graphical models, and enables direct application of probabilistic inference techniques such as Expectation-Maximization (EM) and message passing for solving the control problem (Attias, 2003; Toussaint & Storkey, 2006; Toussaint, 2009). Rawlik et al. (2012); Kappen et al. (2012) then introduced KL control, a generic formulation of SOC as Kullback-Leibler (KL) divergence minimization, and connected it to prior work on RL with additional KL cost (Todorov, 2006). Since our primary focus is to connect with DQNs, we specifically focus on the work by Rawlik et al. (2012) as they derive a temporal-difference-based approach on which we build our methods.
The KL control formulation defines a prior dynamics or policy, and derives a variant of the control or RL problem as performing approximate inference in a graphical model. Let $\tau$ be a trajectory of state and action sequences, $p(\tau)$ be a prior dynamics, and $r(\tau)$ be the reward of the trajectory. Then, an additional binary variable $b$ is introduced and a graphical model is defined as $p(\tau, b) = p(\tau)p(b|\tau)$, where $p(b = 1|\tau) = e^{r(\tau)/c}$ and $c$ is the temperature variable. An approximation to $p(\tau|b = 1)$ can be derived using the variational free-energy method, and this leads to a cost with a similar form to the RL problem previously defined, but with an additional penalty based on the KL divergence from the prior trajectory,
$$\log p(b = 1) = \log \int p(\tau)p(b = 1|\tau)\, d\tau \quad (6)$$

$$\ge \mathbb{E}_{q(\tau)}[\log p(\tau)p(b = 1|\tau) - \log q(\tau)] \quad (7)$$

$$= \mathbb{E}_{q(\tau)}\big[r(\tau)/c - \mathrm{KL}[q(\tau)\|p(\tau)]\big] = L_v(q) \quad (8)$$
where $q(\tau)$ is the variational distribution. Rewriting the variational objective $L_v(q)$ in Eq. 8 in terms of the policy $\pi_\theta$, we get the following RL objective with KL-regularization, also known as KL control,
$$L_v(\theta) = \mathbb{E}_\pi\Big[\sum_t r(s_t, a_t)/c - \mathrm{KL}[\pi_\theta(\cdot|s_t)\|p(\cdot|s_t)]\Big]. \quad (9)$$
In contrast, the objective in Section 3 is,

$$L_v(\theta) = \mathbb{E}_\pi\Big[\sum_t r(s_t, a_t)/c + \log p(a_t|s_t)\Big]. \quad (10)$$
The difference is that Eq. 9 includes an entropy regularizer, and thus a different off-policy method from Q-learning is required. A generalization of Ψ-learning (Rawlik et al., 2012) and G-learning (Fox et al.)² are two off-policy methods for solving the KL-regularized RL problem, where additional generalized-Ψ and G functions are defined and learned instead of Q. We implement both of these algorithms as well, treating the prior policy as the conditional distribution $p(a|s)$ defined by the trained Note RNN. To the best of our knowledge, this is the first application of KL-regularized off-policy methods with deep neural networks to sequence modeling tasks. The two methods are given below respectively,
$$L(\theta) = \mathbb{E}_\beta\big[(\log p(a|s) + r_{MT}(s, a)/c + \gamma \log \sum_{a'} e^{\Psi(s', a'; \theta^-)} - \Psi(s, a; \theta))^2\big] \quad (11)$$

$$\pi_\theta(a|s) \propto e^{\Psi(s, a; \theta)} \quad (12)$$

$$L(\theta) = \mathbb{E}_\beta\big[(r_{MT}(s, a)/c + \gamma \log \sum_{a'} e^{\log p(a'|s') + G(s', a'; \theta^-)} - G(s, a; \theta))^2\big] \quad (13)$$

$$\pi_\theta(a|s) \propto p(a|s)\, e^{G(s, a; \theta)}. \quad (14)$$

²The methods in the original papers are derived for different motivations and presented in different forms as described in Section 4, but we refer to them by their names as the derivations follow closely from the papers.
Both methods can be seen as instances of KL-regularized deep Q-learning, and they also subsume entropy-regularized deep Q-learning by removing the $\log p(a|s)$ term. The main difference between the two methods is the definition of the action-value functions generalized-Ψ and G. In fact, G-learning can be directly derived from generalized Ψ-learning by reparametrizing $\Psi(s, a) = \log p(a|s) + G(s, a)$. The G-function does not give the policy directly but instead needs to be dynamically mixed with the prior policy probabilities. While this computation is straightforward for discrete action domains as here, extensions to continuous action domains require additional considerations such as normalizability of advantage function parametrizations (Gu et al., 2016). The KL control-based derivation also has another benefit in that the stochastic policies can be directly used as an exploration strategy, instead of heuristics such as $\epsilon$-greedy or additive noise (Mnih et al., 2013; Lillicrap et al., 2016). The derivations for both methods are included in the appendix for completeness.
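For contrast with the hard-max backup of Q-learning, the following sketch shows the soft log-sum-exp backups that form the targets inside Eqs. 11 and 13; the helper names are ours, and the defaults γ = 0.5 and c = 0.5 follow the experimental settings in Section 5.

```python
import numpy as np

def logsumexp(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))  # numerically stable log-sum-exp

def psi_target(log_p, r_mt, psi_next, gamma=0.5, c=0.5):
    """Target inside the generalized Psi-learning loss (Eq. 11): the prior
    log-probability is added to the reward, and max becomes log-sum-exp."""
    return log_p + r_mt / c + gamma * logsumexp(psi_next)

def g_target(r_mt, log_p_next, g_next, gamma=0.5, c=0.5):
    """Target inside the G-learning loss (Eq. 13): the prior log-probabilities
    are mixed into the backup over next actions instead of into the reward."""
    return r_mt / c + gamma * logsumexp(log_p_next + g_next)
```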
3.2 MUSIC-THEORY BASED REWARD
A central question of this paper is whether RL can be used to constrain a sequence learner such that the sequences it generates adhere to a desired structure. To test this hypothesis, we developed several rules that we believe describe more pleasant-sounding melodies, taking inspiration from a text on melodic composition (Gauldin, 1995). We do not claim these characteristics are exhaustive, strictly necessary for good composition, or even particularly interesting. They simply serve the purpose of guiding the model towards traditional composition structure. It is therefore crucial to apply the RL Tuner framework to retain the knowledge learned from real songs in the training data.
Following the principles set out on page 42 of Gauldin's book (Gauldin, 1995), we define the reward function $r_{MT}(a, s)$ to encourage melodies to have the following characteristics. All notes should belong to the same key, and the melody should begin and end with the tonic note of the key; e.g. if the key is C-major, this note would be middle C. This note should occur in the first beat and last 4 beats of the melody. Unless a rest is introduced or a note is held, a single tone should not be repeated more than four³ times in a row. To encourage variety, we penalize the model if the melody is highly correlated with itself at a lag of 1, 2, or 3 beats. The penalty is applied when the auto-correlation coefficient is greater than .15. The melody should avoid awkward intervals like augmented 7ths, or large jumps of more than an octave. Gauldin also indicates good compositions should move by a mixture of small steps and larger harmonic intervals, with emphasis on the former; the reward values for intervals reflect these requirements. When the melody moves with a large interval (a 5th or more) in one direction, it should eventually be resolved by a leap back or gradual movement in the opposite direction. Leaping twice in the same direction is negatively rewarded. The highest note of the melody should be unique, as should the lowest note. Finally, the model is rewarded for playing motifs, which are defined as a succession of notes representing a short musical “idea”; in our implementation, a bar of music with three or more unique notes. Since repetition has been shown to be key to emotional engagement with music (Livingstone et al., 2012), we also sought to train the model to repeat the same motif within a melody.
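To show the flavor of these rules as code, below is a small, hedged sketch of a few terms of $r_{MT}(a, s)$ in a C-major setting, using the event encoding from Section 5 (sixteenth-note steps, pitches encoded from 2 upward). Apart from the -100 penalty for excessive repetition, which the paper reports in Section 6, the reward magnitudes here are assumptions, not the authors' exact constants.

```python
import numpy as np

IN_KEY = {0, 2, 4, 5, 7, 9, 11}  # C-major scale degrees, with C encoded as 0

def music_theory_reward(melody, action):
    """Illustrative subset of r_MT(a, s) for a melody given as a list of
    event codes; magnitudes other than the -100 repeat penalty are assumed."""
    r = 0.0
    is_pitch = action >= 2  # events 0 and 1 are note-off / no-event
    if is_pitch and (action - 2) % 12 not in IN_KEY:
        r -= 1.0  # note outside the key (assumed magnitude)
    # a single tone repeated more than four times in a row
    if is_pitch and len(melody) >= 4 and all(n == action for n in melody[-4:]):
        r -= 100.0  # penalty magnitude reported in Section 6
    # penalize self-correlation at lags of 1-3 beats (4 sixteenth-steps/beat)
    seq = np.asarray(melody + [action], dtype=float)
    for lag in (4, 8, 12):
        if len(seq) > lag + 1:
            a, b = seq[:-lag], seq[lag:]
            if a.std() > 0 and b.std() > 0 and abs(np.corrcoef(a, b)[0, 1]) > 0.15:
                r -= 1.0  # assumed magnitude
    return r
```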
4 RELATED WORK
Generative modeling of music with RNNs has been explored in a variety of contexts, including generating Celtic folk music (Sturm et al., 2016), or performing Blues improvisation (Eck & Schmidhuber, 2002). Other approaches have examined RNNs with richer expressivity, latent variables for notes, or raw audio synthesis (Boulanger-Lewandowski et al., 2012; Gu et al., 2015; Chung et al., 2015). Recently, impressive performance in generating music from raw audio has been attained with convolutional neural networks with receptive fields at various time scales (Dieleman et al., 2016).
Although the application of RL to RNNs is a relatively new area, recent work has attempted to combine the two approaches. MIXER (Mixed Incremental Cross-Entropy Reinforce) (Ranzato et al., 2015) uses BLEU score as a reward signal to gradually introduce an RL loss to a text translation model. After initially training the model using cross-entropy, the training process is repeated using cross-entropy loss for the first T − ∆ tokens in a sequence (where T is the length of the sequence), and using RL for the remainder of the sequence. Another approach (Bahdanau et al., 2016) applies an actor-critic method and uses BLEU score directly to train a critic network to output the value of each word, where the actor is again initialized with the policy of an RNN trained with next-step prediction. Reward-augmented maximum likelihood (Norouzi et al., 2016) augments the standard ML with a sequence-level reward function and connects it with the above RL training methods. These approaches assume that the complete task reward specification is available. They pre-train a good policy with supervised learning so that RL can be used to learn with the true task objective, since training with RL from scratch is difficult. RL Tuner instead only uses rewards to correct certain properties of the generated data, while learning most information from data. This is important since in many sequence modeling applications such as music or language generation, the true reward function is not available or imperfect and ultimately the model should rely on learning from data. The RL Tuner method provides an elegant and flexible framework for correcting undesirable behaviors of RNNs that can arise from limited training data or imperfect training algorithms.

³While the number four can be considered a rough heuristic, avoiding excessively repeated notes and static melodic contours is Gauldin's first rule of melodic composition (Gauldin, 1995).
SeqGAN (Yu et al., 2016) applies RL to an RNN by using a discriminator network — similar to those used in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) — to classify the realism of a complete sequence, and this classifier-based reward is used as a reward signal to the RNN. The approach is applied to a number of generation problems, including music generation. Although the model obtained improved MSE and BLEU scores on the Nottingham music dataset, it is not clear how these scores map to the subjective quality of the samples (Huszár, 2015), and no samples are provided with the paper. In contrast, we provide both samples and quantitative results demonstrating that our approach improves the metrics defined by the reward function. Further, we show that RL Tuner can be used to explicitly correct undesirable behaviors of an RNN, which could be useful in a broad range of applications.
Also related to our work is that of Li et al. (2016), in which the authors pre-train a model with MLE and then use RL to impose heuristic rules designed to improve the dialog generated by the model. However, after pre-training, only the heuristic rewards are used for further training, which alters the model to optimize only for the heuristic rewards, whereas our approach allows the model to retain information learned from data, while explicitly controlling the trade-off between the influence of data and heuristic reward with the $c$ parameter. While Li and colleagues do use the outputs of the pre-trained model as part of one of the heuristic reward functions, it is only to teach the model to choose dialog turns that minimize the probability that the pre-trained model places on “dull” responses, such as “I don't know”. However, our approach directly penalizes divergence from the probability distribution learned by the MLE model for every response, allowing the model to retain information about the full space of sequences originally learned from data.
Finally, as discussed in Section 3.1, our approach is related to stochastic optimal control (SOC) (Stengel, 1986) and KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012), in particular the two off-policy, model-free methods, Ψ-learning (Rawlik et al., 2012) and G-learning (Fox et al.). Both approaches solve a KL-regularized RL problem, in which a term is introduced to the reward objective to penalize KL divergence from some prior policy. While our methods rely on similar derivations presented in these papers, there are some key differences. First, these techniques have not been applied to DQNs or RNNs, or as a way to fine-tune a pre-trained RNN with additional desired characteristics. Secondly, our methods have different motivations and forms from the original papers: original Ψ-learning (Rawlik et al., 2012) restricts the prior policy to be the policy at the previous iteration and solves the original RL objective with conservative, KL-regularized policy updates, similar to conservative policy gradient methods (Kakade, 2001; Peters et al., 2010; Schulman et al., 2015). The original G-learning (Fox et al.) penalizes divergence from a simple uniform prior policy in order to cope with over-estimation of target Q values, and includes scheduling for the temperature parameter $c$. Lastly, our work includes the Q-learning objective with additional cross-entropy reward as a comparable alternative, and provides for the first time comparisons among the three methods for incorporating prior knowledge in RL.
5 EXPERIMENTS
To train the Note RNN, we extract monophonic melodies from a corpus of 30,000 MIDI songs. Melodies are quantized at the granularity of a sixteenth note, so each time step corresponds to one sixteenth of a bar of music. We encode a melody using two special events plus three octaves of notes.
The special events are used to introduce rests and notes with longer durations, and are encoded as 0 = note off, 1 = no event. Three octaves of pitches, starting from MIDI pitch 48, are then encoded as 2 = C3, 3 = C#3, 4 = D3, ..., 37 = B5. For example, the sequence {4, 1, 0, 1} encodes an eighth note with pitch D3, followed by an eighth note rest. As the melodies are monophonic, playing another note implicitly ends the last note that was played without requiring an explicit note off event. Thus the sequence {2, 4, 6, 7} encodes a melody of four sixteenth notes: C3, D3, E3, F3. A length-38 one-hot encoding of these values is used for both network input and network output.
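A small sketch of this encoding, with a decoder that reproduces the two worked examples above; the helper names are ours.

```python
NOTE_OFF, NO_EVENT = 0, 1
BASE_PITCH = 48  # MIDI C3; encoded values 2..37 cover C3..B5

def encode_pitch(midi_pitch):
    """Map a MIDI pitch in [48, 84) onto the 38-symbol vocabulary."""
    assert BASE_PITCH <= midi_pitch < BASE_PITCH + 36
    return midi_pitch - BASE_PITCH + 2

def decode(events):
    """Decode events into (midi_pitch_or_None, duration_in_sixteenths) pairs,
    where None denotes a rest."""
    notes, current, dur = [], None, 0
    for e in events:
        if e == NO_EVENT:          # sustain the sounding note (or the rest)
            dur += 1
            continue
        if dur:                    # close out whatever was sounding
            notes.append((current, dur))
        current = None if e == NOTE_OFF else e - 2 + BASE_PITCH
        dur = 1
    notes.append((current, dur))
    return notes

assert decode([4, 1, 0, 1]) == [(50, 2), (None, 2)]                  # eighth D3, eighth rest
assert decode([2, 4, 6, 7]) == [(48, 1), (50, 1), (52, 1), (53, 1)]  # C3 D3 E3 F3
```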
The Note RNN consists of one LSTM layer of 100 cells, and was trained for 30,000 iterations with a batch size of 128. Optimization was performed with Adam (Kingma & Ba, 2014), and gradients were clipped to ensure the L2 norm was less than 5. The learning rate was initially set to .5, and a momentum of 0.85 was used to exponentially decay the learning rate every 1000 steps. To regularize the network, a penalty of β = 2.5 × 10−5 was applied to the L2 norm of the network weights. Finally, the losses for the first 8 notes of each sequence were not used to train the model, since it cannot reasonably be expected to accurately predict them with no context. The trained Note RNN eventually obtained a validation accuracy of 92% and a log perplexity score of .2536.
The learned weights of the Note RNN were used to initialize the three sub-networks in the RL Tuner model. Each RL Tuner model was trained for 1,000,000 iterations, using the Adam optimizer, a batch size of 32, and clipping gradients in the same way. The reward discount factor was $\gamma = .5$. The Target-Q-network's weights $\theta^-$ were gradually updated to be similar to those of the Q-network ($\theta$) according to the formula $(1 - \eta)\theta^- + \eta\theta$, where $\eta = .01$ is the Target-Q-network update rate. We replicated our results for a number of settings for the weight placed on the music-theory rewards, $c$; we present results for $c = .5$ below because we believe them to be most musically pleasing. Similarly, we replicated the results using both $\epsilon$-greedy and Boltzmann exploration, and present the results using $\epsilon$-greedy exploration below.
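In code form, the soft target update is a one-liner per parameter; this is a sketch where parameter dictionaries stand in for whatever container the real network uses.

```python
def soft_update(theta_target, theta, eta=0.01):
    """Polyak update: theta^- <- (1 - eta) * theta^- + eta * theta."""
    return {k: (1.0 - eta) * theta_target[k] + eta * theta[k] for k in theta}
```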
We compare three methods for implementing RL Tuner: Q-learning, generalized Ψ-learning, and G-learning, where the policy defined by the trained Note RNN is used as the cross entropy reward in Q-learning and the prior policy in G- and generalized Ψ-learning. These approaches are compared to both the original performance of the Note RNN, and a model trained using only RL and no prior policy. Model evaluation is performed every 100,000 training epochs, by generating 100 melodies and assessing the average rMT and log p(a|s). All of the code for RL Tuner, including a checkpointed version of the trained Note RNN is available at https://github.com/natashamjaques/magenta/tree/rl-tuner.
6 RESULTS
Table 1 provides quantitative results in the form of performance on the music theory rules to which we trained the model to adhere; for example, we can assess the fraction of notes played by the model which belonged to the correct key, or the fraction of melodic leaps that were resolved. The statistics were computed by randomly generating 100,000 melodies from each model.
The results above demonstrate that the application of RL is able to correct almost all of the targeted “bad behaviors” of the Note RNN, while improving performance on the desired metrics. For example, the original LSTM model was extremely prone to repeating the same note; after applying RL, we see that the number of notes belonging to some excessively repeated segment has dropped from 63% to nearly 0% in all of the RL Tuner models. While the metrics for the G model did not improve as consistently, the Q and Ψ models successfully learned to play in key, resolve melodic leaps, and play motifs. The number of melodies that start with the tonic note has also increased, melody auto-correlation has decreased, and repeated motifs have increased slightly. The degree of improvement on these metrics is related to the magnitude of the reward given for the behavior. For example, a strong penalty of -100 was applied each time a note was excessively repeated, while a reward of only 3 was applied at the end of a melody for unique extrema notes (which most likely explains the lack of improvement on this metric). The reward values could be adjusted to improve the metrics further, however we found that these values produced the most pleasant melodies.
While the metrics indicate that the targeted behaviors of the RNN have improved, it is not clear whether the models have retained information about the training data. Figure 2a plots the average log p(a|s) as produced by the Reward RNN for melodies generated by the models every 100,000 training epochs; Figure 2b plots the average rMT . Included in the plots is an RL only model trained using only the music theory rewards, with no information about log p(a|s). Since each model is initialized with the weights of the trained Note RNN, we see that as the models quickly learn to adhere to the music theory constraints, log p(a|s) falls from its initial point. For the RL only model, log p(a|s) reaches an average of -3.65, which is equivalent to an average p(a|s) of approximately 0.026. Since there are 38 actions, this represents essentially a random policy with respect to the distribution defined by the Note RNN. Figure 2a shows that each of our models (Q, Ψ, and G) attain higher log p(a|s) values than this baseline, indicating they have maintained information about the data probabilities. The G-learning implementation scores highest on this metric, at the cost of slightly lower average rMT . This compromise between data probability and adherence to music theory could explain the difference in G model’s performance on the music theory metrics in Table 1. Finally, while c = 0.5 produced melodies that sounded better subjectively, we found that by increasing the c parameter it is possible to train all the models to have even higher average log p(a|s).
The question remains whether the RL-tuned models actually produce more pleasing melodies. To answer it, we conducted a user study via Amazon Mechanical Turk in which participants were asked to rate which of two randomly selected melodies they preferred on a Likert scale. A total of 192 ratings were collected; each model was involved in 92 of these comparisons. Figure 3 plots the number of comparisons in which a melody from each model was selected as the most musically pleasing. A Kruskal-Wallis H test of the ratings showed that there was a statistically significant difference between the models, χ2(3) = 109.480, p < 0.001. Mann-Whitney U post-hoc tests revealed that the melodies from all three RL Tuner models (Q, Ψ, and G) had significantly higher ratings than the melodies of the Note RNN, p < .001. The Q and Ψ melodies were also rated as significantly more pleasing than those of the G model, but did not differ significantly from each other. The sample melodies used for the study are available here: goo.gl/XIYt9m; we encourage readers to judge their quality for themselves.
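For readers who want to reproduce this style of analysis, a sketch with SciPy follows; the rating lists are made-up placeholders, since the study's raw Likert data are not published.

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical per-model Likert ratings (placeholders, not the study's data)
note_rnn  = [1, 2, 2, 1, 3, 2]
q_model   = [4, 5, 4, 5, 4, 5]
psi_model = [5, 4, 4, 5, 5, 4]
g_model   = [3, 3, 4, 2, 3, 3]

h, p = kruskal(note_rnn, q_model, psi_model, g_model)  # omnibus test
u, p_q = mannwhitneyu(q_model, note_rnn)               # post-hoc pairwise test
print(f"Kruskal-Wallis H={h:.2f} (p={p:.4f}); Q vs. Note RNN p={p_q:.4f}")
```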
Listening to the samples produced by the Note RNN reveals that they are sometimes discordant and usually dull; the model tends to place rests frequently, repeat the same note, and produce melodies with little variation. In contrast, the melodies produced by the RL Tuner models are more varied and interesting. The G model tends to produce energetic and chaotic melodies, which include sequences of repeated notes. This repetition is likely because the G policy as defined in Eq. 14 directly mixes $p(a|s)$ with the output of the G network, and the Note RNN strongly favours repeating notes. The most pleasant-sounding melodies are generated by the Q and Ψ models. These melodies stay firmly in key and frequently choose more harmonious interval steps, leading to melodic and pleasant melodies. However, it is clear they have retained information about the training data; for example, the sample q2.wav in the sample directory ends with a seemingly familiar riff.
7 DISCUSSION AND FUTURE WORK
We have derived a novel sequence learning framework which uses RL rewards to correct properties of sequences generated by an RNN, while keeping much of the information learned from supervised training on data. We proposed and evaluated three alternative techniques for achieving this, and showed promising results on music generation tasks.
While we acknowledge that the simple monophonic melodies generated by these models — which are based on overly simplistic rules of melodic composition — do not approach the level of artistic merit of human composers, we believe this study provides a proof-of-concept that encoding domain knowledge using our method can help the outputs of an LSTM adhere to a more consistent structure. The musical complexity of the songs is limited not just by the heuristic rules, but also by the numerical encoding, which cannot represent the dynamics and expressivity of a musical performance. However, although these simple melodies cannot surpass those of human musicians, attempting to train a model to generate aesthetically pleasing outputs in the absence of a better metric of human taste than log-likelihood is a problem of broader interest to the artificial intelligence community.
In addition to the ability to train models to generate pleasant-sounding melodies, we believe our approach of using RL to refine RNN models could be promising for a number of applications. For example, it is well known that a common failure mode of RNNs is to repeatedly generate the same token. In text generation and automatic question answering, this can take the form of repeatedly generating the same response (e.g. “How are you?” → “How are you?” → “How are you?” ...). We have demonstrated that with our approach we can correct for this unwanted behavior, while still maintaining information that the model learned from data. Although manually writing a reward function may seem unappealing to those who believe in training models end-to-end based only on data, that approach is limited by the quality of the data that can be collected. If the data contains hidden biases, this can lead to highly undesirable consequences. Recent research has shown that the word2vec embeddings in popular language models trained on standard corpora consistently contain the same harmful biases with respect to race and gender that are revealed by implicit association tests on humans (Caliskan-Islam et al., 2016). In contrast to relying solely on possibly biased data, our approach allows for encoding high-level domain knowledge into the RNN, providing a general, alternative tool for training sequence models.
ACKNOWLEDGMENTS
This work was supported by Google Brain, the MIT Media Lab Consortium, and Canada’s Natural Sciences and Engineering Research Council (NSERC). We thank Dzmitry Bahdanau, Greg Wayne, Sergey Levine, and Timothy Lillicrap for helpful discussions on RL and stochastic optimal control.
8 APPENDIX
8.1 OFF-POLICY METHODS DERIVATIONS FOR KL-REGULARIZED REINFORCEMENT LEARNING
Given the KL-regularized RL objective defined in Eq. 9, the value function is given by,

$$V(s_t; \pi) = \mathbb{E}_\pi\Big[\sum_{t' \ge t} r(s_{t'}, a_{t'})/c - \mathrm{KL}[\pi(\cdot|s_{t'})\|p(\cdot|s_{t'})]\Big] \quad (15)$$
8.1.1 GENERALIZED Ψ-LEARNING
The following derivation is based on modifications to (Rawlik et al., 2012) and resembles the derivation in Fox et al. We define the generalized Ψ function as,

$$\Psi(s_t, a_t; \pi) = r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\mathbb{E}_\pi\Big[\sum_{t' \ge t+1} r(s_{t'}, a_{t'})/c - \mathrm{KL}[\pi(\cdot|s_{t'})\|p(\cdot|s_{t'})]\Big] \quad (16, 17)$$

$$= r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1}; \pi)] \quad (18)$$

The value function can be expressed as,
$$V(s_t; \pi) = \mathbb{E}_\pi[\Psi(s_t, a_t; \pi)] + H[\pi] \quad (19)$$

$$= \mathbb{E}_\pi[\Psi(s_t, a_t; \pi) - \log \pi(a_t|s_t)] \quad (20)$$
Fixing $\Psi(s_t, a_t) = \Psi(s_t, a_t; \pi)$ and constraining $\pi$ to be a probability distribution, the optimal greedy policy update $\pi^*$ can be derived by functional calculus, along with the corresponding optimal value function,
$$\pi^*(a_t|s_t) \propto e^{\Psi(s_t, a_t)} \quad (21)$$

$$V(s_t; \pi^*) = \log \sum_{a_t} e^{\Psi(s_t, a_t)} \quad (22)$$
Given Eqs. 18 and 22, the following Bellman optimality equation for the generalized Ψ function is derived, and the Ψ-learning loss in Eq. 11 follows directly.
$$\Psi(s_t, a_t; \pi^*) = r(s_t, a_t)/c + \log p(a_t|s_t) + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\Big[\log \sum_{a_{t+1}} e^{\Psi(s_{t+1}, a_{t+1}; \pi^*)}\Big] \quad (23)$$
8.1.2 G-LEARNING
The following derivation is based on (Fox et al.) with small modifications. We define the G function as,

$$G(s_t, a_t; \pi) = r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\mathbb{E}_\pi\Big[\sum_{t' \ge t+1} r(s_{t'}, a_{t'})/c - \mathrm{KL}[\pi(\cdot|s_{t'})\|p(\cdot|s_{t'})]\Big] \quad (24)$$

$$= r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1}; \pi)] = \Psi(s_t, a_t; \pi) - \log p(a_t|s_t) \quad (25)$$
A similar derivation to the above can be applied.
$$V(s_t; \pi) = \mathbb{E}_\pi[G(s_t, a_t; \pi)] - \mathrm{KL}[\pi(\cdot|s_t)\|p(\cdot|s_t)] \quad (26)$$

$$= \mathbb{E}_\pi\Big[G(s_t, a_t; \pi) - \log \frac{\pi(a_t|s_t)}{p(a_t|s_t)}\Big] \quad (27)$$

$$\pi^*(a_t|s_t) \propto p(a_t|s_t)\, e^{G(s_t, a_t)} \quad (28)$$

$$V(s_t; \pi^*) = \log \sum_{a_t} p(a_t|s_t)\, e^{G(s_t, a_t)} \quad (29)$$

$$G(s_t, a_t; \pi^*) = r(s_t, a_t)/c + \mathbb{E}_{p(s_{t+1}|s_t, a_t)}\Big[\log \sum_{a_{t+1}} p(a_{t+1}|s_{t+1})\, e^{G(s_{t+1}, a_{t+1}; \pi^*)}\Big] \quad (30)$$
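As a worked check of the remark that follows, substituting the reparametrization $\Psi(s, a) = G(s, a) + \log p(a|s)$ into the Bellman optimality equation for Ψ (Eq. 23) and cancelling the log-prior term on both sides recovers Eq. 30:

```latex
% Substitute \Psi(s,a) = G(s,a) + \log p(a|s) into Eq. 23:
G(s_t,a_t) + \log p(a_t|s_t)
  = r(s_t,a_t)/c + \log p(a_t|s_t)
  + \mathbb{E}_{p(s_{t+1}|s_t,a_t)}\Big[\log \textstyle\sum_{a_{t+1}}
      e^{\,G(s_{t+1},a_{t+1}) + \log p(a_{t+1}|s_{t+1})}\Big].
% Cancelling \log p(a_t|s_t) and using e^{\log p} = p gives Eq. 30:
G(s_t,a_t)
  = r(s_t,a_t)/c
  + \mathbb{E}_{p(s_{t+1}|s_t,a_t)}\Big[\log \textstyle\sum_{a_{t+1}}
      p(a_{t+1}|s_{t+1})\, e^{\,G(s_{t+1},a_{t+1})}\Big].
```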
Alternatively, as the sketch above makes explicit, the expression for G-learning can be derived from Ψ-learning by simple reparametrization with $\Psi(s, a) = G(s, a) + \log p(a|s)$ in Eq. 23. |
1. What is the main contribution of the paper in terms of combining LSTMs and handcrafted rewards for music generation?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to scale to more complicated reward functions and its suitability for composing real music?
3. How does the reviewer assess the significance of the results achieved by the proposed method, especially when compared to prior works?
4. What are some potential alternatives to LSTMs that could be explored for music generation tasks, and how might they address some of the limitations of the current approach?
5. How might the paper be improved by incorporating feedback from individuals with musical training and a deeper understanding of music theory? |
Review | Review |
This paper suggests combining LSTMs, trained on a large MIDI corpus, with a handcrafted reward function that helps to fine-tune the model in a musically meaningful way. The idea to use hand-crafted rewards in such a way is great and seems promising for practical scenarios, where a musician would like to design a set of rules, rather than a set of melodies.
Even though some choices made along the way seem rather ad hoc and simplistic from a music-theoretical perspective, the results sound like an improvement upon the Note RNN baseline, but we also don't know how cherry-picked these results are.
I am not convinced that this approach will scale to much more complicated reward functions necessary to compose real music. Maybe LSTMs are the wrong approach altogether if they have so much trouble learning to produce pleasant melodies from such a relatively big corpus of data. Aren't there any alternative differentiable models that are more suitable? What about dilated convolution based approaches?
What I don't like about the paper is that the short melodies are referenced as compositions while being very far from meaningful music; they are not even polyphonic, after all. I think it would be great if such papers were written with the help or feedback of people who have real musical training and are more critical towards these details.
What I like about the paper is that the authors make an effort to understand what is going on; Table 1 is interesting, for instance. However, Figure 3 should have included real melody excerpts with the same sound synthesis/sample setup. Besides that, more discussion on the shortcomings of the presented method should be added.
In summary, I do like the paper and idea and I can imagine that such RL based fine-tuning approaches will indeed be useful for musicians. Even though the novelty might be limited, the paper serves as a documentation on how to achieve solid results in practice. |
ICLR | Title
Tuning Recurrent Neural Networks with Reinforcement Learning
Abstract
The approach of training sequence models using supervised learning and next-step prediction suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. We propose a novel sequence-learning approach in which we use a pre-trained Recurrent Neural Network (RNN) to supply part of the reward value in a Reinforcement Learning (RL) model. Thus, we can refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from KL control. We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note-RNN is then refined using our method and rules of music theory. We show that by combining maximum likelihood (ML) and RL in this way, we can not only produce more pleasing melodies, but significantly reduce unwanted behaviors and failure modes of the RNN, while maintaining information learned from data.
1 INTRODUCTION
Generative modeling of music with deep neural networks is typically accomplished by training an RNN such as a Long Short-Term Memory (LSTM) network to predict the next note in a musical sequence (e.g. Eck & Schmidhuber (2002)). Similar to a Character RNN (Mikolov et al., 2010), these Note RNNs can be used to generate novel melodies by initializing them with a short sequence of notes, then repeatedly sampling from the model's output distribution to obtain the next note. While melodies and text generated in this way have recently garnered attention¹, this type of model tends to suffer from common failure modes, such as excessively repeating tokens, or producing sequences that lack a consistent theme or structure. Such sequences can appear wandering and random (see Graves (2013) for a text example).
Music compositions adhere to relatively well-defined structural rules, making music an interesting sequence generation challenge. For example, music theory tells us that groups of notes belong to keys, chords follow progressions, and songs have consistent structures made up of musical phrases. Our research question is therefore whether such music-theory-based constraints can be learned by an RNN, while still allowing it to maintain note probabilities learned from data.
To approach this problem we propose RL Tuner, a novel sequence learning approach in which RL is used to impose structure on an RNN trained on data. The reward function in our framework combines task-related rewards with the probability of a given action originally learned by the pre-trained RNN. Thus, our model directly preserves inforamtion about the original probability distributions learned from data, while allowing us to explicitly control the trade-off between the influence of data
1http://www.theverge.com/2016/6/1/11829678/google-magenta-melody-art-generative-artificialintelligence
and heuristic rewards. This is an important novel direction of research, because in many tasks the available reward functions are not a perfect metric that alone will lead to the best task performance in the real world (e.g. BLEU score). Unlike previous work (e.g. (Ranzato et al., 2015), (Bahdanau et al., 2016), (Norouzi et al., 2016), (Li et al., 2016)) we do not use ML training as a way to simply bootstrap the training of an RL model, but rather we rely mainly on information learned from data, and use RL only as a way to refine characteristics of the output by imposing structural rules.
This paper contributes to the sequence training and RL literature by a) proposing a novel method for combining ML and RL training; b) showing the connection between this approach and Stochastic Optimal Control (SOC)/KL-control with a pre-trained RNN as a prior policy; c) showing the explicit relationships among a generalized variant of Ψ-learning (Rawlik et al., 2012), G-learning (Fox et al.), and Q-learning with log prior augmentation; d) being the first work to explore generalized Ψ-learning and G-learning with deep neural networks, serving as a reference for exploring KLregularized RL objectives with deep Q-learning; e) empirically comparing generalized Ψ-learning, G-learning, and Q-learning with log prior augmentation for the first time; and f) applying this new technique to the problem of music generation, and showing through an empirical study that this method produces melodies which are more melodic, harmonious, interesting, and rated as significantly more subjectively pleasing, than those of the original Note RNN. We suggest that the RL Tuner method could have potential applications in a number of areas as a general way to refine existing recurrent models trained on data by imposing constraints on their behavior.
2 BACKGROUND
2.1 DEEP Q-LEARNING
In RL, an agent interacts with an environment. Given the state of the environment at time t, st, the agent takes an action at according to its policy π(at|st), receives a reward r(st, at), and the environment transitions to a new state, st+1.The agent’s goal is to maximize reward over a sequence of actions, with a discount factor of γ applied to future rewards. The optimal deterministic policy π∗ is known to satisfy the following Bellman optimality equation,
Q(st, at;π ∗) = r(st, at) + γEp(st+1|st,at)[maxat+1 Q(st+1, at+1;π ∗)] (1)
where Qπ(st, at) = Eπ[ ∑∞ t′=t γ
t′−tr(st′ , at′)] is the Q function of a policy π. Q-learning techniques (Watkins & Dayan, 1992; Sutton et al., 1999) learn this optimal Q function by iteratively minimizing the Bellman residual. The optimal policy is given by π∗(a|s) = arg maxaQ(s, a). Deep Q-learning(Mnih et al., 2013) uses a neural network called the deep Q-network (DQN) to approximate the Q function Q(s, a; θ). The network parameters θ are learned by applying stochastic gradient descent (SGD) updates with respect to the following loss function,
L(θ) = Eβ [(r(s, a) + γmax a′
Q(s′, a′; θ−)−Q(s, a; θ))2] (2)
where β is the exploration policy, and θ− is the parameters of the Target Q-network (Mnih et al., 2013) that is held fixed during the gradient computation. The moving average of θ is used as θ− as proposed in (Lillicrap et al., 2016). Exploration can be performed with either the -greedy method or Boltzmann sampling. Additional standard techniques such as replay memory (Mnih et al., 2013) and Deep Double Q-learning (Hasselt et al., 2015) are used to stablize and improve learning.
2.2 MUSIC GENERATION WITH LSTM
Previous work with music generation using deep learning (e.g. (Eck & Schmidhuber, 2002), (Sturm et al., 2016)) has involved training an RNN to learn to predict the next note in a monophonic melody; we call this type of model a Note RNN. Often, the Note RNN is implemented using a Long ShortTerm Memory (LSTM) network (Gers et al., 2000). LSTMs are networks in which each recurrent cell learns to control the storage of information through the use of an input gate, output gate, and forget gate. The first two gates control whether information is able to flow into and out of the cell, and the latter controls whether or not the contents of the cell should be reset. Due to these properties, LSTMs are better at learning long-term dependencies in the data, and can adapt more rapidly to new data (Graves, 2013). A softmax function can be applied to the final outputs of the network to obtain
the probability the network places on each note, and softmax cross-entropy loss can be used to train the model via back propagation through time (BPTT) (Graves & Schmidhuber, 2005). However, as previously described, the melodies generated by this model tend to wander, and lack musical structure; we will show that they are also perceived as less musically pleasing by listeners. In the next section, we will show how to improve this model with RL.
3 RL TUNER DESIGN
Given a trained Note RNN, the goal is to teach it concepts about music theory, while still maintaining the information about typical melodies originally learned from data. To accomplish this task, we propose RL Tuner, a novel sequence training method incorporating RL. We use an LSTM trained on data (the Note RNN) to supply the initial weights for three networks in RL Tuner: the Q-network and Target Q-network in the DQN algorithm as described in Section 2.1, and a Reward RNN. Therefore, the Q-network is a recurrent LSTM model, with architecture identical to that of the original Note RNN. The Reward RNN is used to supply part of the reward value used to train the model, and is held fixed during training.
In order to formulate music generation as an RL problem, we treat placing the next note in the melody as taking an action. The state of the environment s consists of the previous note, and the internal state of the LSTM cells of both the Q-network and the Reward RNN. Thus, Q(a, s) can be calculated by initializing the recurrent Q-network with the appropriate memory cell contents, running it for one time step using the previous note, and evaluating the output value for the action a. The next action can be selected with either a Boltzmann sampling or -greedy exploration strategy.
Given action a, the reward can be computed by combining probabilities learned from the training data with knowledge of music theory. We define a set of music-theory based rules (described in Section 3.2) to impose constraints on the melody that the model is composing through a reward signal rMT (a, s). For example, if a note is in the wrong key, then the model receives a negative reward. However, it is necessary that the model still be “creative,” rather than learning a simple melody that can easily exploit these rewards. Therefore, we use the Reward RNN — or equivalently the trained Note RNN — to compute log p(a|s), the log probability of a note a given a melody s, and incorporate this into the reward function. Figure 1 illustrates these ideas.
The total reward given at time t is therefore:
r(s, a) = log p(a|s) + rMT (a, s)/c (3)
where c is a constant controlling the emphasis placed on the music theory reward. Given the DQN loss function in Eq. 2 and modified reward function in Eq. 3, the new loss function and learned policy for RL Tuner are,
L(θ) = Eβ [(log p(a|s) + rMT (a, s)/c+ γmax a′
Q(s′, a′; θ−)−Q(s, a; θ))2] (4)
πθ(a|s) = δ(a = arg max a Q(s, a; θ)). (5)
Thus, the modified loss function forces the model to learn that the most valuable actions are those that conform to the music theory rules, but still have high probability in the original data.
3.1 RELATIONSHIP TO KL CONTROL
The technique described in Section 3 has a close connection to stochastic optimal control (SOC) (Stengel, 1986) and in particular, KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012). SOC casts the optimal planning in stochastic environments as inference in graphical models, and enables direct application of probabilistic inference techniques such as ExpectationMaximization (EM) and message passing for solving the control problem (Attias, 2003; Toussaint & Storkey, 2006; Toussaint, 2009). Rawlik et al. (2012); Kappen et al. (2012) then introduced KL control, a generic formulation of the SOC as Kullback-Leibler (KL) divergence minimization, and connected to prior work on RL with additional KL cost (Todorov, 2006). Since our primary focus is to connect with DQNs, we specifically focus on the work by Rawlik et al. (2012) as they derive a temporal-difference-based approach on which we build our methods.
KL control formulation defines a prior dynamics or policy, and derives a variant of the control or RL problem as performing approximate inference in a graphical model. Let τ be a trajectory of state and action sequences, p(τ) be a prior dynamics, and r(τ) be the reward of the trajectory. Then, an additional binary variable b is introduced and a graphical model is defined as p(τ, b) = p(τ)p(b|τ), where p(b = 1|τ) = er(τ)/c and c is the temperature variable. An approximation to p(τ |b = 1) can be derived using the variational free-energy method, and this leads to a cost with a similar form to the RL problem previously defined, but with an additional penalty based on the KL divergence from the prior trajectory,
log p(τ |b = 1) = log ∫ p(τ)p(b|τ)dτ (6)
≥ Eq(τ)[log p(τ)p(b|τ)− log q(τ)] (7) = Eq(τ)[r(τ)/c− KL[q(τ)||p(τ)]] = Lv(q) (8)
where q(τ) is the variational distribution. Rewriting the variational objective Lv(q) in Eq. 6 in terms of policy πθ, we get the following RL objective with KL-regularization, also known as KL control,
Lv(θ) = Eπ[ ∑ t r(st, at)/c−KL[πθ(·|st)||p(·|st)]]. (9)
In contrast, the objective in Section 3 is, Lv(θ) = Eπ[ ∑ t r(st, at)/c+ log p(at|st)]. (10)
The difference is that Eq. 9 includes an entropy regularizer, and thus a different off-policy method from Q-learning is required. A generalization of Ψ-learning (Rawlik et al., 2012), and G-learning (Fox et al.)2 are two off-policy methods for solving the KL-regularized RL problem, where additional generalized-Ψ and G functions are defined and learned instead of Q. We implement both of these algorithms as well, treating the prior policy as the conditional distribution p(a|s) defined by the trained Note RNN. To the best of our knowledge, this is the first application of KL-regularized off-policy methods with deep neural networks to sequence modeling tasks. The two methods are given below respectively,
L(θ) = Eβ [(log p(a|s) + rMT (s, a)/c+ γ log ∑ a′ eΨ(s ′,a′;θ−) −Ψ(s, a; θ))2] (11)
πθ(a|s) ∝ eΨ(s,a;θ) (12)
L(θ) = Eβ [(rMT /c(s, a) + γ log ∑ a′ elog p(a ′|s′)+G(s′,a′;θ−) −G(s, a; θ))2] (13)
πθ(a|s) ∝ p(a|s)eG(s,a;θ). (14) 2The methods in the original papers are derived for different motivations and presented in different forms as
described in Section 4, but we refer them using their names as the derivations follow closely from the papers.
Both methods can be seen as instances of KL-regularized deep Q-learning, and they also subsume entropy-regularized deep Q-learning by removing the log p(a|s) term. The main difference between the two methods is the definition of the action-value functions generalized-Ψ and G. In fact G-learning can be directly derived from generalized Ψ-learning by reparametrizing Ψ(s, a) = log p(a|s)+G(s, a). TheG-function does not give the policy directly but instead needs to be dynamically mixed with the prior policy probabilities. While this computation is straight-forward for discrete action domains as here, extensions to continuous action domains require additional considerations such as normalizability of advantage function parametrizations (Gu et al., 2016). The KL control-based derivation also has another benefit in that the stochastic policies can be directly used as an exploration strategy, instead of heuristics such as -greedy or additive noise (Mnih et al., 2013; Lillicrap et al., 2016). The derivations for both methods are included in the appendix for completeness.
3.2 MUSIC-THEORY BASED REWARD
A central question of this paper is whether RL can be used to constrain a sequence learner such that the sequences it generates adhere to a desired structure. To test this hypothesis, we developed several rules that we believe describe more pleasant-sounding melodies, taking inspiration from a text on melodic composition (Gauldin, 1995). We do not claim these characteristics are exhaustive, strictly necessary for good composition, or even particularly interesting. They simply serve the purpose of guiding the model towards traditional composition structure. It is therefore crucial to apply the RL Tuner framework to retain the knowledge learned from real songs in the training data.
Following the principles set out on page 42 of Gauldin’s book (Gauldin, 1995), we define the reward function rMT (a, s) to encourage melodies to have the following characteristics. All notes should belong to the same key, and the melody should begin and end with the tonic note of the key; e.g. if the key is C-major, this note would be middle C. This note should occur in the first beat and last 4 beats of the melody. Unless a rest is introduced or a note is held, a single tone should not be repeated more than four3 times in a row. To encourage variety, we penalize the model if the melody is highly correlated with itself at a lag of 1, 2, or 3 beats. The penalty is applied when the auto-correlation coefficient is greater than .15. The melody should avoid awkward intervals like augmented 7ths, or large jumps of more than an octave. Gauldin also indicates good compositions should move by a mixture of small steps and larger harmonic intervals, with emphasis on the former; the reward values for intervals reflect these requirements. When the melody moves with a large interval (a 5th or more) in one direction, it should eventually be resolved by a leap back or gradual movement in the opposite direction. Leaping twice in the same direction is negatively rewarded. The highest note of the melody should be unique, as should the lowest note. Finally, the model is rewarded for playing motifs, which are defined as a succession of notes representing a short musical “idea”; in our implementation, a bar of music with three or more unique notes. Since repetition has been shown to be key to emotional engagement with music (Livingstone et al., 2012), we also sought to train the model to repeat the same motif within a melody.
4 RELATED WORK
Generative modeling of music with RNNs has been explored in a variety of contexts, including generating Celtic folk music (Sturm et al., 2016), or performing Blues improvisation (Eck & Schmidhuber, 2002). Other approaches have examined RNNs with richer expressivity, latent-variables for notes, or raw audio synthesis (Boulanger-Lewandowski et al., 2012; Gu et al., 2015; Chung et al., 2015). Recently, impressive performance in generating music from raw audio has been attained with convolutional neural networks with receptive fields at various time scales (Dieleman et al., 2016).
Although the application of RL to RNNs is a relatively new area, recent work has attempted to combine the two approaches. MIXER (Mixed Incremental Cross-Entropy Reinforce) (Ranzato et al., 2015) uses BLEU score as a reward signal to gradually introduce a RL loss to a text translation model. After initially training the model using cross-entropy, the training process is repeated using cross-entropy loss for the T −∆ tokens in a sequence (where T is the length of the sequence), and
3While the number four can be considered a rough heuristic, avoiding excessively repeated notes and static melodic contours is Gauldin’s first rule of melodic composition (Gauldin, 1995).
using RL for the remainder of the sequence. Another approach (Bahdanau et al., 2016) applies an actor-critic method and uses BLEU score directly to train a critic network to output the value of each word, where the actor is again initialized with the policy of an RNN trained with next-step prediction. Reward-augmented maximum likelihood (Norouzi et al., 2016) augments the standard ML with a sequence-level reward function and connects it with the above RL training methods. These approaches assume that the complete task reward specification is available. They pre-train a good policy with supervised learning so that RL can be used to learn with the true task objective, since training with RL from scratch is difficult. RL Tuner instead only uses rewards to correct certain properties of the generated data, while learning most information from data. This is important since in many sequence modeling applications such as music or language generation, the true reward function is not available or imperfect and ultimately the model should rely on learning from data. The RL Tuner method provides an elegant and flexible framework for correcting undesirable behaviors of RNNs that can arise from limited training data or imperfect training algorithms.
SeqGAN (Yu et al., 2016) applies RL to an RNN by using a discriminator network — similar to those used in Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) — to classify the realism of a complete sequence, and this classifier-based reward is used as a reward signal to the RNN. The approach is applied to a number of generation problems, including music generation. Although the model obtained improved MSE and BLEU scores on the Nottingham music dataset, it is not clear how these scores map to the subjective quality of the samples (Huszár, 2015), and no samples are provided with the paper. In contrast, we provide both samples and quantitative results demonstrating that our approach improves the metrics defined by the reward function. Further, we show that RL Tuner can be used to explicitly correct undesirable behaviors of an RNN, which could be useful in a broad range of applications.
Also related to our work is that of Li and colleagues Li et al. (2016), in which the authors pre-train a model with MLE and then use RL to impose heuristic rules designed to improve the dialog generated by the model. However, after pre-training, only the heuristic rewards are used for further training, which alters the model to optimize only for the heuristic rewards, whereas our approach allows the model to retain information learned from data, while explicitly controlling the trade-off between the influence of data and heuristic reward with the c parameter. While Li and colleagues do use the outputs of the pre-trained model as part of one of the heuristic reward functions, it is only to teach the model to choose dialog turns that minimize the probability that the pre-trained model places on “dull” responses, such as “I don’t know”. However, our approach directly penalizes divergence from the probability distribution learned by the MLE model for every response, allowing the model to retain information about the full space of sequences originally learned from data.
Finally, as discussed in Section 3.1, our approach is related to stochastic optimal control (SOC) (Stengel, 1986) and KL control (Todorov, 2006; Kappen et al., 2012; Rawlik et al., 2012), in particular the two off-policy, model-free methods, Ψ-learning (Rawlik et al., 2012) and Glearning (Fox et al.). Both approaches solve a KL-regularized RL problem, in which a term is introduced to the reward objective to penalize KL divergence from some prior policy. While our methods rely on similar derivations presented in these papers, there are some key differences. First, these techniques have not been applied to DQNs or RNNs, or as a way to fine-tune a pre-trained RNN with additional desired charateristics. Secondly, our methods have different motivations and forms from the original papers: original Ψ-learning (Rawlik et al., 2012) restricts the prior policy to be the policy at the previous iteration and solves the original RL objective with conservative, KL-regularized policy updates, similar to conservative policy gradient methods (Kakade, 2001; Peters et al., 2010; Schulman et al., 2015). The original G-learning (Fox et al.) penalizes divergence from a simple uniform prior policy in order to cope with over-estimation of target Q values, and includes scheduling for the temperature parameter c. Lastly, our work includes the Q-learning objective with additional cross-entropy reward as a comparable alternative, and provides for the first time comparisons among the three methods for incorporating prior knowledge in RL.
5 EXPERIMENTS
To train the Note RNN, we extract monophonic melodies from a corpus of 30,000 MIDI songs. Melodies are quantized at the granularity of a sixteenth note, so each time step corresponds to one sixteenth of a bar of music. We encode a melody using two special events plus three octaves of notes.
The special events are used to introduce rests and notes with longer durations, and are encoded as 0 = note off, 1 = no event. Three octaves of pitches, starting from MIDI pitch 48, are then encoded as 2 = C3, 3 = C#3, 4 = D3, ..., 37 = B5. For example, the sequence {4, 1, 0, 1} encodes an eighth note with pitch D3, followed by an eighth note rest. As the melodies are monophonic, playing another note implicitly ends the last note that was played without requiring an explicit note off event. Thus the sequence {2, 4, 6, 7} encodes a melody of four sixteenth notes: C3, D3, E3, F3. A length-38 one-hot encoding of these values is used for both network input and network output.
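To make the encoding concrete, here is a minimal sketch of the event-to-one-hot conversion; the helper names are ours, not those of the released code:

```python
import numpy as np

NUM_CLASSES = 38           # 2 special events + 36 pitches (three octaves from MIDI pitch 48)
NOTE_OFF, NO_EVENT = 0, 1  # the two special events

def event_to_midi_pitch(event):
    """Map an encoded pitch event (2..37) back to its MIDI pitch (48..83)."""
    assert 2 <= event < NUM_CLASSES
    return event - 2 + 48

def one_hot_encode(events):
    """Encode a melody (list of event integers) as a (T, 38) one-hot matrix."""
    onehot = np.zeros((len(events), NUM_CLASSES), dtype=np.float32)
    onehot[np.arange(len(events)), events] = 1.0
    return onehot

# {4, 1, 0, 1}: an eighth note with pitch D3, followed by an eighth note rest.
melody = [4, 1, 0, 1]
x = one_hot_encode(melody)
print(x.shape)                 # (4, 38), the network input/output representation
print(event_to_midi_pitch(4))  # 50, i.e., D3
```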
The Note RNN consists of one LSTM layer of 100 cells, and was trained for 30,000 iterations with a batch size of 128. Optimization was performed with Adam (Kingma & Ba, 2014), and gradients were clipped to ensure the L2 norm was less than 5. The learning rate was initially set to .5, and a momentum of 0.85 was used to exponentially decay the learning rate every 1000 steps. To regularize the network, a penalty of β = 2.5 × 10⁻⁵ was applied to the L2 norm of the network weights. Finally, the losses for the first 8 notes of each sequence were not used to train the model, since it cannot reasonably be expected to accurately predict them with no context. The trained Note RNN eventually obtained a validation accuracy of 92% and a log perplexity score of .2536.
The learned weights of the Note RNN were used to initialize the three sub-networks in the RL Tuner model. Each RL Tuner model was trained for 1,000,000 iterations, using the Adam optimizer, a batch size of 32, and clipping gradients in the same way. The reward discount factor was γ = .5. The Target-Q-network’s weights θ⁻ were gradually updated to be similar to those of the Q-network (θ) according to the formula (1 − η)θ⁻ + ηθ, where η = .01 is the Target-Q-network update rate. We replicated our results for a number of settings for the weight placed on the music-theory rewards, c; we present results for c = .5 below because we believe them to be most musically pleasing. Similarly, we replicated the results using both ε-greedy and Boltzmann exploration, and present the results using ε-greedy exploration below.
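The soft target update can be expressed in a few lines; the following is a generic sketch with hypothetical parameter dictionaries, not the released implementation:

```python
def soft_update_target(q_params, target_params, eta=0.01):
    """Move the Target-Q-network weights towards the Q-network weights:
    theta_minus <- (1 - eta) * theta_minus + eta * theta."""
    for name, theta in q_params.items():
        target_params[name] = (1.0 - eta) * target_params[name] + eta * theta
    return target_params

# Toy usage with scalar "weights" (real networks hold one tensor per layer):
q = {"w": 1.0}
target = {"w": 0.0}
for _ in range(3):
    target = soft_update_target(q, target)
print(target["w"])  # 0.0297..., slowly tracking the Q-network
```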
We compare three methods for implementing RL Tuner: Q-learning, generalized Ψ-learning, and G-learning, where the policy defined by the trained Note RNN is used as the cross-entropy reward in Q-learning and as the prior policy in G- and generalized Ψ-learning. These approaches are compared to both the original performance of the Note RNN, and a model trained using only RL and no prior policy. Model evaluation is performed every 100,000 training epochs, by generating 100 melodies and assessing the average rMT and log p(a|s). All of the code for RL Tuner, including a checkpointed version of the trained Note RNN, is available at https://github.com/natashamjaques/magenta/tree/rl-tuner.
6 RESULTS
Table 1 provides quantitative results in the form of performance on the music theory rules to which we trained the model to adhere; for example, we can assess the fraction of notes played by the model which belonged to the correct key, or the fraction of melodic leaps that were resolved. The statistics were computed by randomly generating 100,000 melodies from each model.
The results above demonstrate that the application of RL is able to correct almost all of the targeted “bad behaviors” of the Note RNN, while improving performance on the desired metrics. For example, the original LSTM model was extremely prone to repeating the same note; after applying RL, we see that the number of notes belonging to some excessively repeated segment has dropped from 63% to nearly 0% in all of the RL Tuner models. While the metrics for the G model did not improve as consistently, the Q and Ψ models successfully learned to play in key, resolve melodic leaps, and play motifs. The number of melodies that start with the tonic note has also increased, melody auto-correlation has decreased, and repeated motifs have increased slightly. The degree of improvement on these metrics is related to the magnitude of the reward given for the behavior. For example, a strong penalty of -100 was applied each time a note was excessively repeated, while a reward of only 3 was applied at the end of a melody for unique extrema notes (which most likely explains the lack of improvement on this metric). The reward values could be adjusted to improve the metrics further, however we found that these values produced the most pleasant melodies.
While the metrics indicate that the targeted behaviors of the RNN have improved, it is not clear whether the models have retained information about the training data. Figure 2a plots the average log p(a|s) as produced by the Reward RNN for melodies generated by the models every 100,000 training epochs; Figure 2b plots the average rMT . Included in the plots is an RL only model trained using only the music theory rewards, with no information about log p(a|s). Since each model is initialized with the weights of the trained Note RNN, we see that as the models quickly learn to adhere to the music theory constraints, log p(a|s) falls from its initial point. For the RL only model, log p(a|s) reaches an average of -3.65, which is equivalent to an average p(a|s) of approximately 0.026. Since there are 38 actions, this represents essentially a random policy with respect to the distribution defined by the Note RNN. Figure 2a shows that each of our models (Q, Ψ, and G) attain higher log p(a|s) values than this baseline, indicating they have maintained information about the data probabilities. The G-learning implementation scores highest on this metric, at the cost of slightly lower average rMT . This compromise between data probability and adherence to music theory could explain the difference in G model’s performance on the music theory metrics in Table 1. Finally, while c = 0.5 produced melodies that sounded better subjectively, we found that by increasing the c parameter it is possible to train all the models to have even higher average log p(a|s).
The question remains whether the RL-tuned models actually produce more pleasing melodies. To answer it, we conducted a user study via Amazon Mechanical Turk in which participants were asked to rate which of two randomly selected melodies they preferred on a Likert scale. A total of 192 ratings were collected; each model was involved in 92 of these comparisons. Figure 3 plots the number of comparisons in which a melody from each model was selected as the most musically pleasing. A Kruskal-Wallis H test of the ratings showed that there was a statistically significant difference between the models, χ2(3) = 109.480, p < 0.001. Mann-Whitney U post-hoc tests revealed that the melodies from all three RL Tuner models (Q, Ψ, and G) had significantly higher ratings than the melodies of the Note RNN, p < .001. The Q and Ψ melodies were also rated as significantly more pleasing than those of the G model, but did not differ significantly from each other. The sample melodies used for the study are available here: goo.gl/XIYt9m; we encourage readers to judge their quality for themselves.
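The reported tests are standard and can be reproduced with SciPy; the sketch below uses made-up rating arrays purely to show the mechanics:

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical Likert ratings per model, for illustration only.
ratings = {
    "note_rnn": [1, 2, 2, 1, 3, 2],
    "q":        [4, 5, 4, 5, 4, 5],
    "psi":      [5, 4, 4, 5, 5, 4],
    "g":        [3, 4, 3, 3, 4, 3],
}

h, p = kruskal(*ratings.values())  # omnibus test across the four models
print(f"Kruskal-Wallis H = {h:.3f}, p = {p:.4g}")

# Post-hoc pairwise comparisons against the Note RNN baseline.
for model in ("q", "psi", "g"):
    u, p = mannwhitneyu(ratings[model], ratings["note_rnn"])
    print(f"{model} vs note_rnn: U = {u:.1f}, p = {p:.4g}")
```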
Listening to the samples produced by the Note RNN reveals that they are sometimes discordant and usually dull; the model tends to place rests frequently, repeat the same note, and produce melodies with little variation. In contrast, the melodies produced by the RL Tuner models are more varied and interesting. The G model tends to produce energetic and chaotic melodies, which include sequences of repeated notes. This repetition is likely because the G policy as defined in Eq. 14 directly mixes p(a|s) with the output of the G network, and the Note RNN strongly favours repeating notes. The most pleasant-sounding melodies are generated by the Q and Ψ models. These melodies stay firmly in key and frequently choose more harmonious interval steps, leading to more melodic and pleasant compositions. However, it is clear they have retained information about the training data; for example, the sample q2.wav in the sample directory ends with a seemingly familiar riff.
7 DISCUSSION AND FUTURE WORK
We have derived a novel sequence learning framework which uses RL rewards to correct properties of sequences generated by an RNN, while keeping much of the information learned from supervised training on data. We proposed and evaluated three alternative techniques for achieving this, and showed promising results on music generation tasks.
While we acknowledge that the simple monophonic melodies generated by these models — which are based on overly simplistic rules of melodic composition — do not approach the level of artistic merit of human composers, we believe this study provides a proof-of-concept that encoding domain knowledge using our method can help the outputs of an LSTM adhere to a more consistent structure. The musical complexity of the songs is limited not just by the heuristic rules, but also by the numerical encoding, which cannot represent the dynamics and expressivity of a musical performance. However, although these simple melodies cannot surpass those of human musicians, attempting to train a model to generate aesthetically pleasing outputs in the absence of a better metric of human taste than log-likelihood is a problem of broader interest to the artificial intelligence community.
In addition to the ability to train models to generate pleasant-sounding melodies, we believe our approach of using RL to refine RNN models could be promising for a number of applications. For example, it is well known that a common failure mode of RNNs is to repeatedly generate the same token. In text generation and automatic question answering, this can take the form of repeatedly generating the same response (e.g. “How are you?” → “How are you?” → “How are you?” ...). We have demonstrated that with our approach we can correct for this unwanted behavior, while still maintaining information that the model learned from data. Although manually writing a reward function may seem unappealing to those who believe in training models end-to-end based only on data, that approach is limited by the quality of the data that can be collected. If the data contains hidden biases, this can lead to highly undesirable consequences. Recent research has shown that the word2vec embeddings in popular language models trained on standard corpora consistently contain the same harmful biases with respect to race and gender that are revealed by implicit association tests on humans (Caliskan-Islam et al., 2016). In contrast to relying solely on possibly biased data, our approach allows for encoding high-level domain knowledge into the RNN, providing a general, alternative tool for training sequence models.
ACKNOWLEDGMENTS
This work was supported by Google Brain, the MIT Media Lab Consortium, and Canada’s Natural Sciences and Engineering Research Council (NSERC). We thank Dzmitry Bahdanau, Greg Wayne, Sergey Levine, and Timothy Lillicrap for helpful discussions on RL and stochastic optimal control.
8 APPENDIX
8.1 OFF-POLICY METHODS DERIVATIONS FOR KL-REGULARIZED REINFORCEMENT LEARNING
Given the KL-regularized RL objective defined in Eq. 9, the value function is given by

V(s_t; π) = E_π[ Σ_{t′≥t} r(s_{t′}, a_{t′})/c − KL[π(·|s_{t′}) || p(·|s_{t′})] ]    (15)
8.1.1 GENERALIZED Ψ-LEARNING
The following derivation is based on modifications to (Rawlik et al., 2012) and resembles the derivation in Fox et al. We define the generalized Ψ function as

Ψ(s_t, a_t; π) = r(s_t, a_t)/c + log p(a_t|s_t)    (16)
  + E_{p(s_{t+1}|s_t,a_t)} E_π[ Σ_{t′≥t+1} r(s_{t′}, a_{t′})/c − KL[π(·|s_{t′}) || p(·|s_{t′})] ]    (17)
= r(s_t, a_t)/c + log p(a_t|s_t) + E_{p(s_{t+1}|s_t,a_t)}[ V(s_{t+1}; π) ]    (18)

The value function can be expressed as

V(s_t; π) = E_π[Ψ(s_t, a_t; π)] + H[π]    (19)
= E_π[Ψ(s_t, a_t; π) − log π(a_t|s_t)]    (20)

Fixing Ψ(s_t, a_t) = Ψ(s_t, a_t; π) and constraining π to be a probability distribution, the optimal greedy policy update π* can be derived by functional calculus, along with the corresponding optimal value function:

π*(a_t|s_t) ∝ e^{Ψ(s_t, a_t)}    (21)

V(s_t; π*) = log Σ_{a_t} e^{Ψ(s_t, a_t)}    (22)

Given Eqs. 18 and 22, the following Bellman optimality equation for the generalized Ψ function is derived, from which the Ψ-learning loss in Eq. 11 directly follows:

Ψ(s_t, a_t; π*) = r(s_t, a_t)/c + log p(a_t|s_t) + E_{p(s_{t+1}|s_t,a_t)}[ log Σ_{a_{t+1}} e^{Ψ(s_{t+1}, a_{t+1}; π*)} ]    (23)
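For concreteness, a minimal sketch of the target value implied by Eq. 23, using tabular arrays instead of the Q-network parameterization used in the paper (the names are ours, not the released code):

```python
import numpy as np
from scipy.special import logsumexp

def psi_target(r, log_p, psi_next, c=0.5):
    """TD target implied by Eq. 23 for generalized Psi-learning.

    r        -- scalar reward r(s_t, a_t), e.g., from the music-theory rules
    log_p    -- log p(a_t | s_t) under the pre-trained Note RNN
    psi_next -- array of Psi(s_{t+1}, a) values over all 38 actions
    """
    return r / c + log_p + logsumexp(psi_next)

print(psi_target(r=1.0, log_p=-0.3, psi_next=np.zeros(38)))
```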
8.1.2 G-LEARNING
The following derivation is based on (Fox et al.) with small modifications. We define the G function as

G(s_t, a_t; π) = r(s_t, a_t)/c + E_{p(s_{t+1}|s_t,a_t)} E_π[ Σ_{t′≥t+1} r(s_{t′}, a_{t′})/c − KL[π(·|s_{t′}) || p(·|s_{t′})] ]    (24)
= r(s_t, a_t)/c + E_{p(s_{t+1}|s_t,a_t)}[ V(s_{t+1}; π) ] = Ψ(s_t, a_t; π) − log p(a_t|s_t)    (25)
A similar derivation to the one above can be applied:
V(s_t; π) = E_π[G(s_t, a_t; π)] − KL[π(·|s_t) || p(·|s_t)]    (26)
= E_π[ G(s_t, a_t; π) − log (π(a_t|s_t) / p(a_t|s_t)) ]    (27)

π*(a_t|s_t) ∝ p(a_t|s_t) e^{G(s_t, a_t)}    (28)

V(s_t; π*) = log Σ_{a_t} p(a_t|s_t) e^{G(s_t, a_t)}    (29)

G(s_t, a_t; π*) = r(s_t, a_t)/c + E_{p(s_{t+1}|s_t,a_t)}[ log Σ_{a_{t+1}} p(a_{t+1}|s_{t+1}) e^{G(s_{t+1}, a_{t+1}; π*)} ]    (30)
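Analogously, a sketch of the target implied by Eq. 30; compared with Ψ-learning, the log-sum-exp is weighted by the prior policy (again tabular and illustrative only):

```python
import numpy as np
from scipy.special import logsumexp

def g_target(r, log_p_next, g_next, c=0.5):
    """TD target implied by Eq. 30 for G-learning.

    r          -- scalar reward r(s_t, a_t)
    log_p_next -- array of log p(a | s_{t+1}) over all next actions
    g_next     -- array of G(s_{t+1}, a) values over all next actions
    """
    # logsumexp(log_p_next + g_next) computes log sum_a p(a|s') * exp(G(s', a)),
    # i.e., the Psi-learning backup after substituting Psi = G + log p.
    return r / c + logsumexp(log_p_next + g_next)

uniform_log_p = np.full(38, -np.log(38.0))
print(g_target(r=1.0, log_p_next=uniform_log_p, g_next=np.zeros(38)))  # 2.0
```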
Alternatively, the G-learning backup above can be derived from Ψ-learning by a simple reparametrization, Ψ(s, a) = G(s, a) + log p(a|s), in Eq. 23. | 1. What is the main contribution of the paper regarding likelihood and reward-based learning?
2. How does the proposed method differ from previous approaches in optimal control and reinforcement learning?
3. What are the strengths and weaknesses of the paper's discussion on optimal control and its history?
4. Can the approach used in the paper mitigate against underfitting in generation tasks?
5. How does the choice of E_pi \log p(a_t|s_t) impact the policy and model confidence?
6. Is there a possibility of approximating the music theory reward using a differentiable model instead of an RL approach? | Review | Review
This paper uses a combination of likelihood and reward based learning to learn sequence models for music. The ability to combine likelihood and reward based learning has been long known, as a result of the unification of inference and learning first appearing in the ML literature with the EM formalism of Attias (2003) for fixed horizons, extended by Toussaint and Storkey (2006) to general horizon settings, by Toussaint et al. (2011) to POMDPs, and generalised further by Kappen et al. (2012) and Rawlik et al. (2012). These papers introduced the basic unification, and so any additional probabilistic or data driven objective can be combined with the reinforcement learning signal: it is all part of a unified reward/likelihood. Hence the optimal control target under unification is p(b=1|\tau) E_p(A,S) \prod_t \pi(a_t|s_t): i.e. the probability of getting reward, and the probability of the policy actions under the known data-derived distribution, thereby introducing the log p(a_t|s_t) into (9) too.
The interpretation of the secondary objective as the prior is an alternative approach under a stochastic optimal control setting, but not the most natural one given the whole principle of SOC of matching control objectives to inference objectives. The SOC off-policy objective would still contain the KL term, so the approach would still differ from that of this paper.
Though the discussion of optimal control is good, I think some further elaboration of the history and how reward augmentation can work in SOC would be valuable. This would allow SOC off-policy methods to be compared with the DQN directly, like for like.
The motivation of the objective (3) is sensible but could be made clearer via the unification argument above. The paper then uses DQN to take a different approach from the variational SOC for achieving that objective.
Another interesting point of discussion is the choice of E_pi \log p(a_t|s_t) – this means the policy must “cover” the model. But one problem in generation is that a well-trained model is often underfit, resulting in actions that, over the course of a number of iterations, move the state into data-unsupported parts of the space. As a result the model is no longer confident and quickly tends to become fairly random. This approach (as opposed to a KL(p||pi) term – for which it is not obvious how an implementation would work) cannot mitigate against that without a very strong signal (to overcome the tails of a distribution). In music, with a smaller discrete alphabet, this is likely to be less of a problem than for real-valued policy densities with exponentially decaying tails. Some further discussion of what you see in light of this issue would be valuable: the use of c to balance things seems critical, and it seems clear from Figure 2 that the reward signal needed to be very high to push the log p signal into the right range.
Altogether, in the music setting this paper provides a reasonable demonstration that augmentation of a sequence model with an additional reward constraint is valuable. It demonstrates that DQN is one way of learning that signal, but AFAICS it does not compare learning the same signal via other techniques. Instead for the comparator techniques it reverts to treating the p(a|s) as a “prior” term rather than a reward term, leaving a bit of a question as to whether DQN is particularly appropriate.
Another interesting question for the discussion is whether the music theory reward could be approximated by a differentiable model, mitigating the need for an RL approach at all. |
ICLR | Title
Evaluating The Search Phase of Neural Architecture Search
Abstract
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.
1 INTRODUCTION
By automating the design of a neural network for the task at hand, Neural Architecture Search (NAS) has tremendous potential to impact the practicality of deep learning (Zoph & Le, 2017; Liu et al., 2018b;a; Tan et al., 2018; Baker et al., 2016), and has already obtained state-of-the-art performance on many tasks. A typical NAS technique (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018a) has two stages: the search phase, which aims to find a good architecture, and the evaluation one, where the best architecture is trained from scratch and validated on the test data.
In the literature, NAS algorithms are typically compared based on their results in the evaluation phase. While this may seem intuitive, the search phase of these algorithms often differs in several ways, such as the architecture sampling strategy and the search space used, and the impact of these individual factors cannot be identified by looking at the downstream task results only. Furthermore, the downstream task results are often reported for a single random seed, which leaves unanswered the question of robustness of the search strategies.
In this paper, we therefore propose to investigate the search phase of existing NAS algorithms in a controlled manner. To this end, we compare the quality of the NAS solutions with a random search policy, which uniformly randomly samples an architecture from the same search space as the NAS algorithms, and then trains it using the same hyper-parameters as the NAS solutions. To reduce randomness, the search using each policy, i.e., random and NAS ones, is repeated several times, with different random seeds.
We perform a series of experiments on the Penn Tree Bank (PTB) (Marcus et al., 1994a) and CIFAR-10 (Krizhevsky et al., 2009) datasets, in which we compared the state-of-the-art NAS algorithms whose code is publicly available—DARTS (Liu et al., 2019b), NAO (Luo et al., 2018) and ENAS (Pham et al., 2018)—to our random policy. We reached the surprising conclusion that, as shown in Table 1, none of them significantly outperforms random sampling. Since the mean performance of randomly-sampled architectures converges to the mean performance over the entire search space, we further conducted Welch Student’s t-tests (Welch, 1947), which reveal that, in the RNN space, ENAS and DARTS cannot be differentiated from the mean of the entire search space, while NAO yields worse performance than random sampling. While the situation is slightly better in the CNN space, all three algorithms still perform similarly to random sampling. Note that this does not necessarily mean that these algorithms perform poorly, but rather that the search space has been sufficiently constrained so that even a random architecture in this space provides good results. To verify this, we experiment with search spaces where we can exhaustively evaluate all architectures, and observe that these algorithms truly cannot discover top-performing architectures.
In addition to this, we observed that the ranking by quality of candidate architectures produced by the NAS algorithms during the search does not reflect the true performance of these architectures in the evaluation phase. Investigating this further allowed us to identify that weight sharing (Pham et al., 2018), widely adopted to reduce the amount of required resources from thousands of GPU days to a single one, harms the individual networks’ performance. More precisely, using reduced search spaces, we make use of the Kendall Tau τ metric¹ to show that the architecture rankings obtained with and without weight sharing are entirely uncorrelated in the RNN space (τ = -0.004 over 10 runs), and have little correlation in the CNN space (τ = 0.195 over 10 runs). Since such a ranking is usually treated as training data for the NAS sampler in the search phase, this further explains the small margin between random search and the NAS algorithms. We also show that training samplers without weight sharing in the CNN space surpasses random sampling by a significant margin.
In other words, we disprove the common belief that the quality of architectures trained with and without weight sharing is similar. We show that the difference in ranking negatively impacts the search phase of NAS algorithms, thus seriously impeding their robustness and performance.
In short, evaluating the search phase of NAS, which is typically ignored, allowed us to identify two key characteristics of state-of-the-art NAS algorithms: The importance of the search space and the negative impact of weight sharing. We believe that our evaluation framework will be instrumental in designing NAS search strategies that are superior to the random one. Our code is publicly available at https://github.com/kcyu2014/eval-nas.
2 RELATED WORK
Since its introduction in (Zoph & Le, 2017), NAS has demonstrated great potential to surpass the human design of deep networks for both visual recognition (Liu et al., 2018b; Ahmed & Torresani, 2018; Chen et al., 2018; Pérez-Rúa et al., 2018; Liu et al., 2019a) and natural language processing (Zoph & Le, 2017; Pham et al., 2018; Luo et al., 2018; Zoph et al., 2018; Liu et al., 2018b; Cai et al., 2018a). Existing search strategies include reinforcement learning (RL) samplers (Zoph & Le, 2017; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithms (Xie & Yuille, 2017; Real et al., 2017; Miikkulainen et al., 2019; Liu et al., 2018b; Lu et al., 2018), gradient descent (Liu et al., 2019b), Bayesian optimization (Kandasamy et al., 2018; Jin et al., 2019; Zhou et al., 2019) and performance predictors (Liu et al., 2018a; Luo et al., 2018). Here, our goal is not to introduce a new search policy, but rather to provide the means to analyze existing ones. Below, we briefly discuss existing NAS methods and focus on how they are typically evaluated.
Neural architecture search with weight sharing. The potential of vanilla NAS comes with the drawback of requiring thousands of GPU hours even for small datasets, such as PTB and CIFAR-10. Furthermore, even when using such heavy computational resources, vanilla NAS has to restrict the number of trained architectures from a total of 10⁹ to 10⁴, and increasing the sampler accuracy can only be achieved by increasing the resources.
¹The Kendall Tau (Kendall, 1938) metric measures the correlation between two rankings. Details in Appendix A.1.
ENAS (Pham et al., 2018) was the first to propose a training scheme with shared parameters, reducing the resources from thousands of GPU days to one. Instead of being trained from scratch, each sampled model inherits the parameters from previously-trained ones. Since then, NAS research has mainly focused on two directions: 1) Replacing the RL sampler with a better search algorithm, such as gradient descent (Liu et al., 2019b), Bayesian optimization (Zhou et al., 2019) and performance predictors (Luo et al., 2018); 2) Exploiting NAS for other applications, e.g., object detection (Ghiasi et al., 2019; Chen et al., 2019), semantic segmentation (Liu et al., 2019a), and finding compact networks (Cai et al., 2018b; Wu et al., 2018; Chu et al., 2019; Guo et al., 2019).
Characterizing the search space. Ying et al. (2019); Dong & Yang (2020) introduced datasets that contain the ground-truth performance of CNN cells, and Wang et al. (2019) evaluated some traditional search algorithms on them. Similarly, Radosavovic et al. (2019) characterize many CNN search spaces by computing the statistics of a set of sampled architectures, revealing that, for datasets such as CIFAR-10 or ImageNet, these statistics are similar. While these works support our claim that the evaluation of NAS algorithms is crucial, they do not directly evaluate the state-of-the-art NAS algorithms as we do here.
Evaluation of NAS algorithms. Typically, the quality of NAS algorithms is judged based on the results of the final architecture they produce on the downstream task. In other words, the search and robustness of these algorithms are generally not studied, with (Liu et al., 2019b; So et al., 2019) being the only exceptions regarding robustness, where results obtained with different random seeds were reported. Here, we aim to further the understanding of the mechanisms behind the search phase of NAS algorithms. Specifically, we propose doing so by comparing them with a simple random search policy, which uniformly randomly samples one architecture per run in the same search space as the NAS techniques.
While some works have provided partial comparisons to random search, these comparisons unfortunately did not give a fair chance to the random policy. Specifically, (Pham et al., 2018) reports the results of only a single random architecture, and (Liu et al., 2018b) those of an architecture selected among 8 randomly sampled ones as the most promising one after training for 300 epochs only. Here, we show that a fair comparison to the random policy, obtained by training all architectures, i.e., random and NAS ones, for 1000 epochs and averaging over multiple random seeds for robustness, yields a different picture; the state-of-the-art search policies are no better than the random one.
The motivation behind this comparison was our observation of only a weak correlation between the performance of the searched architectures and the ones trained from scratch during the evaluation phase. This phenomenon was already noticed by Zela et al. (2018), and concurrently with our work by Li & Talwalkar (2019); Xie et al. (2019); Ying et al. (2019), but the analysis of its impact or its causes went no further. Here, by contrast, we link this difference in performance between the search and evaluation phases to the use of weight sharing.
While this may seem to contradict the findings of Bender et al. (2018), which, on CIFAR-10, observed a strong correlation between architectures trained with and without weight sharing when searching a CNN cell, our work differs from (Bender et al., 2018) in two fundamental ways: 1) The training scheme in (Bender et al., 2018), in which the entire model with shared parameters is trained via random path dropping, is fundamentally different from those used by state-of-the-art weight-sharing NAS strategies (Pham et al., 2018; Liu et al., 2019b; Luo et al., 2018); 2) While the correlation in (Bender et al., 2018) was approximated using a small subset of sampled architectures, we make use of a reduced search space where we can perform a complete evaluation of all architectures, thus providing an exact correlation measure in this space.
3 EVALUATING THE NAS SEARCH
In this section, we detail our evaluation framework for the NAS search phase. As depicted in Fig. 1(a,b), typical NAS algorithms consist of two phases:
• Search: The goal of this phase is to find the best candidate architecture from the search space². This is where existing algorithms, such as ENAS, DARTS and NAO, differ. Nevertheless, for all the algorithms, the search depends heavily on initialization. In all the studied policies, initialization is random and the outcome thus depends on the chosen random seed.
• Evaluation: In this phase, all the studied algorithms retrain the best model found in the search phase. The retrained model is then evaluated on the test data.
The standard evaluation of NAS techniques focuses solely on the final results on the test data. Here, by contrast, we aim to evaluate the search phase itself, which truly differentiates existing algorithms.
To do this, as illustrated in Fig. 1(c), we establish a baseline; we compare the search phase of existing algorithms with a random search policy. An effective search algorithm should yield a solution that clearly outperforms the random policy. Below, we introduce our framework to compare NAS search algorithms with random search. The three NAS algorithms that we evaluated, DARTS (Liu et al., 2019b), NAO (Luo et al., 2018) and ENAS (Pham et al., 2018), are representative of the state of the art for different search algorithms: reinforcement learning, gradient-descent and performance prediction, and are discussed in Appendix C.
3.1 COMPARING TO RANDOM SEARCH
We implement our random search policy by simply assigning uniform probabilities to all operations. Then, for each node in the Directed Acyclic Graph (DAG) that is typically used to represent an architecture, we randomly sample a connection to one previous node from the resulting distributions.
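A sketch of this policy for the RNN space (the encoding follows the string representation of Appendix B; the function names are ours):

```python
import random

OPERATIONS = ["identity", "sigmoid", "tanh", "relu"]

def sample_random_rnn_cell(num_nodes=12, seed=None):
    """Uniformly sample a cell: each node picks one previous node and one operation."""
    rng = random.Random(seed)
    cell = []
    for node_id in range(1, num_nodes + 1):
        prev = rng.randrange(node_id)   # any of the node_id earlier nodes (0 = input)
        op = rng.choice(OPERATIONS)
        cell.append((prev, op))
    return cell

# One architecture per random seed, as in our comparison protocol.
print(sample_random_rnn_cell(num_nodes=12, seed=0))
```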
An effective search policy should outperform the random one. To evaluate this, we compute the validation results of the best architecture found by the NAS algorithm trained from scratch, as well as those of a single randomly sampled architecture. Comparing these values for a single random seed would of course not provide a reliable measure. Therefore, we repeat this process for multiple random seeds used both during the search phase of the NAS algorithm and to sample one random architecture as described above. We then report the means and standard deviations of these results over the different seeds. Note that while we use different seeds for the search and random sampling, we always use the same seed when training the models from scratch during the evaluation phase.
Our use of multiple random seeds and of the same number of epochs for the NAS algorithms and for our random search policy makes the comparison fair. This contrasts with the comparisons performed in (Pham et al., 2018), where the results of only a single random architecture were reported, and in (Liu et al., 2019b), which selected a single best random architecture among an initial set of 8 after training for 300 epochs only. As shown in Appendix D.2, some models that perform well in the early training stages may yield worse performance than others after convergence. Therefore, choosing the best random architecture after only 300 epochs for PTB and 100 for CIFAR-10, and doing so for a single random seed, might not be representative of the general behavior.
²Details about search spaces are provided in Appendix B.
3.2 SEARCH IN A REDUCED SPACE
Because of the size of standard search spaces, one cannot understand the quality of the search by fully evaluating all possible solutions. Hence, we propose to make use of reduced search spaces with ground-truth architecture performances available to evaluate the search quality. For RNNs, we simply reduce the number of nodes in the search space from 12 to 2. Given that each node is identified by two values, the ID of the incoming node and the activation function, the space has a cardinality |S| = n! · |O|^n, where n = 2 nodes and |O| = 4 operations, thus yielding 32 possible solutions. To obtain the ground truth, we train all of these architectures individually. Each architecture is trained 10 times with a different seed, which therefore yields a mean and standard deviation of its performance. The mean value is used as ground truth—the actual potential of the given architecture. These experiments took around 5000 GPU hours.
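The cardinality of the 2-node space can be checked by direct enumeration; a small sketch:

```python
import itertools

OPERATIONS = ["identity", "sigmoid", "tanh", "relu"]

def enumerate_cells(num_nodes=2):
    """Enumerate every architecture in the reduced RNN space."""
    per_node = [
        [(prev, op) for prev in range(node_id) for op in OPERATIONS]
        for node_id in range(1, num_nodes + 1)
    ]
    return list(itertools.product(*per_node))

cells = enumerate_cells(2)
print(len(cells))  # 32 = 2! * 4**2, matching |S| = n! * |O|^n
```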
For CNNs, we make use of NASBench-101 (Ying et al., 2019), a CNN graph-based search space with 3 possible operations, conv3x3, conv1x1 and max3x3. This framework defines search spaces with between 3 and 7 nodes, with 423,624 architectures in the 7-node case. To the best of our knowledge, we are the first to evaluate the NAS methods used in this paper on NASBench.
4 EXPERIMENTAL RESULTS
To analyze the search phase of the three state-of-the-art NAS algorithms mentioned above, we first compare these algorithms to our random policy when using standard search spaces for RNNs on (PTB) and CNNs on CIFAR-10. Details about the experiment setting are in Appendix C.5. The surprising findings in this typical NAS use case prompted us to study the behavior of the search strategies in reduced search spaces. This allowed us to identify a factor that has a significant impact on the observed results: Weight sharing. We then quantify this impact on the ranking of the NAS candidates, evidencing that it dramatically affects the effectiveness of the search.
4.1 NAS COMPARISON IN A STANDARD SEARCH SPACE
Below, we compare DARTS (Liu et al., 2019b), NAO (Luo et al., 2018), ENAS (Pham et al., 2018) and BayesNAS (Zhou et al., 2019) with our random search policy, as discussed in Section 3.1. We follow (Liu et al., 2019b) to define an RNN search space of 12 nodes and a CNN one of 7 nodes. For each of the four search policies, we run 10 experiments with a different initialization of the sampling policy. During the search phase, we used the authors-provided hyper-parameters and code for each policy. Once a best architecture is identified by the search phase, it is used for evaluation, i.e., we train the chosen architecture from scratch for 1000 epochs for RNNs and 600 for CNNs.
RNN Results. In Figure 2, we plot, on the left, the mean perplexity evolution over the 1000 epochs, obtained by averaging the results of the best architectures found using the 10 consecutive seeds.³ On the right, we show the perplexity evolution for the best cell of each strategy among the 10 different runs. Random sampling is robust and consistently competitive. As shown in Table 1, it outperforms on average the DARTS and NAO policies, and yields the overall best cell for these experiments, with a perplexity of 57.60. Further training this cell for 4000 epochs, as in (Liu et al., 2019b), yields a perplexity of 55.93. The excellent performance of the random policy evidences the high expressiveness of the manually-constructed search space; even arbitrary policies in this space perform well, as evidenced by the relatively low standard deviation over the 10 seeds of the random architectures, shown in Table 1 and Figure 2 (left).
³Starting from seed 1268, which is right after 1267, the seed released by Liu et al. (2019b). Note that, using this seed, we can reproduce the DARTS RNN search and obtain a validation PPL of 55.7 as in (Liu et al., 2019b).
CNN Results. In Table 2, we compare the NAS methods with our random policy in the search space of Liu et al. (2019b). We provide the accuracy reported in the original papers as well as the accuracy we reproduced using our implementation. Note that the NAS algorithms only marginally outperform random search, by less than 0.5% in top-1 accuracy. The best architecture was discovered by NAO, with an accuracy of 97.10%, again less than 0.5% higher than the randomly discovered one. Note that our random sampling comes at no search cost. By contrast, Li & Talwalkar (2019) obtained an accuracy of 97.15% with a different random search policy having the same cost as DARTS.
Observations:
• The evaluated state-of-the-art NAS algorithms do not surpass random search by a significant margin, and even perform worse in the RNN search space.
• The ENAS policy sampler has the lowest variance among the three tested ones. This shows that ENAS is more robust to the variance caused by the random seed of the search phase.
• The NAO policy is more sensitive to the search space; while it yields the best performance in the CNN space, it performs the worst in the RNN one.
• The DARTS policy is very sensitive to random initialization, and yields the largest standard deviation across the 10 runs (2.54 in the RNN space and 0.23 in the CNN space).
Such a comparison of search policies would not have been possible without our framework. Nevertheless, the above analysis does not suffice to identify the reason behind these surprising observations. As mentioned before, one reason could be that the search space has been sufficiently constrained so that all architectures perform similarly well. By contrast, if we assume that the search space does contain significantly better architectures, then we can conclude that these search algorithms truly fail to find a good one. To answer this question, we evaluate these methods in a reduced search space, where we can obtain the true performance of all possible architectures.
4.2 SEARCHING A REDUCED SPACE
The results in the previous section highlight the inability of the studied methods to surpass random search. Encouraged by these surprising results, we then dig deeper into their causes. Below, we make use of search spaces with fewer nodes, which we can explore exhaustively.
Reduced RNN space. We use the same search space as in Section 3.2 but reduce the number of intermediate nodes to 2. In Table 3 (A), we provide the results of searching the RNN 2-node space. Its smaller size allows us to exhaustively compute the results of all possible solutions, thus determining the upper bound for this case. In Figure 3, we plot the rank of the top-1 architecture discovered by the three NAS algorithms for each of the 10 different runs.
We observe that: (i) All policies failed to find the architecture that actually performs best; (ii) The ENAS policy always converged to the same architecture. This further evidences the robustness of ENAS to the random seed; (iii) NAO performs better than random sampling on average because it keeps a ranking of architectures; (iv) DARTS never discovered a top-5 architecture.
Reduced CNN space. In Table 3 (B), we report the mean and best test top-1 accuracy over 10 different runs on the NASBench-101 7-node space. To assess the search performance, we also show the best architecture rank in the entire space. The best test accuracy found by these methods is 93.33, by NAO, which remains much lower than the ground-truth best of 95.06. In terms of ranking, the best rank of these methods across 10 runs is 19522, which is among the top 4% of architectures and yields a probability of 0.62 to surpass a randomly-sampled one given the same search budget. Note that ENAS and DARTS only have a 7% and 24% chance, respectively, to surpass the random policy. See Appendix A.2 for the definition of this probability, and Appendix D.3 for detailed results.
NAO seems to constantly outperform random search in the reduced space. Nevertheless, the final architecture chosen by NAO is always one of the architectures from the initial pool, which were sampled uniformly at random. This indicates that the ranking of NAO is not correctly updated throughout the search and that, in practice, in a reduced space, NAO behaves similarly to random search.
4.3 IMPACT OF WEIGHT SHARING
Our previous experiments in reduced search spaces highlight that the ranking of the searched architectures does not reflect the ground-truth one. As we will show below, this can be traced back to weight sharing, which all the tested algorithms, and the vast majority of existing ones, rely on. To evidence this, we perform the following experiments:
Without WS: We make use of the reduced space, where we have the architecture’s real performance.
With WS: We train the architectures in parallel, using the weight sharing strategy employed in NAO and ENAS. As DARTS does not have discrete representations of the solutions during the search, the idea of solution ranking does not apply. During training, each mini-batch is given to an architecture uniformly sampled from the search space, as sketched below. We repeat the process 10 times, with 10 random seeds, and train the shared weights for 1000 epochs for the RNN experiments and 200 epochs for the CNN ones. Note that this approach is equivalent to Single Path One Shot (SPOS) (Guo et al., 2019). It guarantees equal expectations of the number of times each architecture is sampled, thus overcoming the bias due to unbalanced training resulting from ineffective sampling policies.
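A sketch of this single-path training scheme; `supernet`, `train_loader` and `optimizer` are hypothetical interfaces illustrating the scheme, not any released code:

```python
import random

def train_shared_weights(supernet, train_loader, optimizer, search_space, epochs):
    """Single-path training: every mini-batch updates one uniformly sampled path."""
    for _ in range(epochs):
        for inputs, targets in train_loader:
            arch = random.choice(search_space)            # uniform over all cells
            optimizer.zero_grad()
            loss = supernet.loss(inputs, targets, arch)   # only arch's path is active
            loss.backward()
            optimizer.step()
```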
We then compute the correlation between the architecture rankings found with WS and the ground truth (i.e., the architectures trained independently). For each of the 10 runs of the weight sharing strategy, we evaluate the Kendall Tau metric (defined in Appendix A.1) of the final rankings with respect to the real averaged ranking.
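The correlation itself is a one-liner with SciPy; a sketch with made-up accuracies:

```python
from scipy.stats import kendalltau

# Hypothetical per-architecture accuracies, in the same architecture order.
ground_truth  = [93.1, 94.2, 91.8, 92.5, 95.0]   # stand-alone training
weight_shared = [88.0, 86.5, 84.2, 90.1, 87.3]   # evaluated with shared weights

tau, p_value = kendalltau(ground_truth, weight_shared)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")  # tau near 0 => rankings unrelated
```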
RNN Results. In Figure 4(a), we depict the architecture performance obtained without WS (sorted in ascending order of average validation perplexity), and the corresponding performance with WS. In Figure 4(b), we show the rank difference, where the best and worst were found using the Kendall Tau metric, and show a concrete rank change example in Figure 4(c).
CNN Results. We report the average Kendall Tau across 10 different runs. Note that we sampled up to 200 architectures for each experiment and fully evaluated them on the entire test set, using the test accuracy for ranking. The Kendall Tau for search spaces from 4 to 7 nodes is, respectively, 0.441, 0.314, 0.214 and 0.195. We also provide other statistics in Table 6 of Appendix D.3.
Since NAO and ENAS intrinsically disentangle the training of the shared weights and of the sampler, to further confirm the negative effect of weight sharing, we adapt these algorithms to use the architectures’ performance in the NASBench dataset to train their sampler. Table 4 evidences that, after removing weight sharing, both ENAS and NAO consistently discover a good architecture, as indicated by a small difference between the best over 10 runs and the mean performance. More interestingly, for the 7-node case, the best cells discovered (94.11% by NAO and 94.04% by ENAS) are more than 1% higher than the best cells found with weight sharing (93.33 and 92.54, respectively, in Table 3).
Observations:
• The difference in architecture performance is not related to the use of different random seeds, as indicated by the error bars in Figure 4(a).
• WS never produces the true ranking, as evidenced by the Best case in Figure 4(b).
• The behavior of the WS rankings is greatly affected by changing the seed. In particular, the Kendall Tau values for the plots in Figure 4(b) are 0.282, −0.004 and −0.116 for Best, Average and Worst, respectively.
• For RNNs, the Kendall Tau values are close to 0, which suggests a lack of correlation between the WS rankings and the true one. By contrast, for CNNs, the correlation is on average higher than for RNNs. This matches the observation in Section 4.1 that CNN results are generally better than RNN ones.
• In a reduced CNN space, the ranking disorder increases with the space complexity, i.e., this disorder is proportional to the amount of weight sharing.⁴
• If we train NAO and ENAS without weight sharing on NASBench, the performance is on average 1% higher than with it. This further evidences that weight sharing negatively impacts the sampler, and that, with a good ranking, the sampler can be trained better. Furthermore, the probability to surpass random search increases from 0.62 to 0.92 for NAO and from 0.07 to 0.90 for ENAS.
⁴We also conduct another experiment regarding the amount of sharing in Appendix D.1.
Together with the previous findings, we believe that these results evidence the negative impact of weight sharing; it dramatically affects the performance of the sampled architectures, thus complicating the overall search process and leading to search policies that are no better than the random one.
5 CONCLUSION
In this paper, we have analyzed the effectiveness of the search phase of NAS algorithms via fair comparisons to random search. We have observed that, surprisingly, the search policies of state-ofthe-art NAS techniques are no better than random, and have traced the reason for this to the use of (i) a constrained search space and (ii) weight sharing, which shuffles the architecture ranking during the search, thus negatively impacting it.
In essence, the insights we gained highlight two key properties of state-of-the-art NAS strategies, which had been overlooked in the past due to the single-minded focus of NAS evaluation on the results on the target tasks. We believe that accounting for them will be key to the development of novel NAS algorithms. In the future, we will aim to do so by designing relaxed weight-sharing strategies.
6 ACKNOWLEDGEMENT
This work was supported in part by the Swiss National Science Foundation. We would also like to thank Rene Ranftl and Vladlen Koltun for the discussions and support.
A METRICS TO EVALUATE NAS ALGORITHMS
A.1 KENDALL TAU METRIC
As a correlation measure, we make use of the Kendall Tau (τ) metric (Kendall, 1938): a number in the range [-1, 1] with the following properties:
• τ = −1: Maximum disagreement. One ranking is the opposite of the other.
• τ = 1: Maximum agreement. The two rankings are identical.
• τ close to 0: A value close to zero indicates the absence of correlation.
A.2 PROBABILITY TO SURPASS RANDOM SEARCH
As discussed in Section 3.2, the goal of NASBench is to search for a CNN cell with up to 7 nodes and 3 operations, resulting in 423,624 architectures in total. Each architecture is trained 3 times with different random initializations for up to 108 epochs on the CIFAR-10 training set, and evaluated on the test split. Hence, the average test accuracy of these runs can be seen as the ground-truth performance. In our experiments, we use this to rank the architectures, from 1 (highest accuracy) to 423,624. Given the best architecture’s rank r after n runs, and the maximum rank r_max equal to the total number of architectures, the probability that the best architecture discovered is better than a randomly searched one given the same budget is given by
p = (1 − r/r_max)^n. (1)
We use this as a new metric to evaluate the search phase.
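A worked check of this metric against the values reported in Section 4.2 (a sketch; the helper name is ours):

```python
def prob_surpass_random(rank, total, n_runs):
    """Probability that an architecture of the given rank beats the best of
    n_runs uniformly sampled architectures (Eq. 1; rank 1 is best)."""
    return (1.0 - rank / total) ** n_runs

# Best rank reported in Section 4.2: 19522 out of 423,624 architectures, 10 runs.
print(round(prob_surpass_random(19522, 423624, 10), 2))  # 0.62
```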
B NAS SEARCH SPACE REPRESENTATION
As discussed in the main paper, our starting point is a search space for a neural architecture, as illustrated in Figure 5. A convolutional cell can be represented with a similar topological structure. Following common practice in NAS (Zoph & Le, 2017), a candidate architecture sampled from this space connects the input and the output nodes through a sequence of intermediary ones. Each node is connected to others and has an operation attached to it.
A way of representing this search space (Pham et al., 2018; Luo et al., 2018), depicted in Figure 5(b), is by using strings. Each character in the string indicates either the node ID that the current node is connected to, or the operation selected for the current node. Operations include the identity, sigmoid, tanh and ReLU (Nair & Hinton, 2010).
Following the alternative way introduced in (Liu et al., 2019b), we make use of a vectorized representation of these strings. More specifically, as illustrated by Figure 5(c), a node ID, resp. an operation, is encoded as a vector of probabilities over all node IDs, resp. all operations. For instance, the connection between nodes i and j is represented as y^(i,j)(x) = Σ_{o∈O} p_o · o(x), with O the set of all operations, and p_o = softmax(α_o) = exp(α_o) / Σ_{o′∈O} exp(α_{o′}) the probability of each operation.
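A minimal NumPy sketch of this continuous relaxation for one edge (toy operations, not the DARTS implementation):

```python
import numpy as np

def softmax(alpha):
    e = np.exp(alpha - alpha.max())
    return e / e.sum()

# Toy candidate operations for a single edge (i, j); the real search applies
# this relaxation to the RNN/CNN operation choices.
OPS = [
    lambda x: x,                         # identity
    lambda x: 1.0 / (1.0 + np.exp(-x)),  # sigmoid
    np.tanh,                             # tanh
    lambda x: np.maximum(x, 0.0),        # relu
]

def mixed_edge(x, alpha):
    """y^(i,j)(x) = sum_o p_o * o(x), with p = softmax(alpha)."""
    p = softmax(alpha)
    return sum(p_o * op(x) for p_o, op in zip(p, OPS))

alpha = np.zeros(len(OPS))  # learnable architecture parameters for this edge
print(mixed_edge(np.array([0.5, -1.0]), alpha))
```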
C NAS ALGORITHMS
Here, we discuss the three state-of-the-art NAS algorithms used in our experiments in detail, including their hyper-parameters during the search phase. The current state-of-the-art NAS method on CIFAR-10 is ProxylessNAS (Cai et al., 2018b), with a top-1 accuracy of 97.92. However, this algorithm inherits the sampler from ENAS and DARTS, but with a different objective function, backbone model, and search space. In addition, the code is not publicly available, which precludes us from directly evaluating it.
C.1 ENAS
It adopts a reinforcement learning sampling strategy that is updated with the REINFORCE algorithm. The sampler is implemented as a two-layer LSTM (Hochreiter & Schmidhuber, 1997) and generates a sequence of strings. In the training process, each candidate sampled by the ENAS controller is trained on an individual mini-batch. At the end of each epoch, the controller samples new architectures that are evaluated on a single batch of the validation dataset. After this, the controller is updated accordingly using these validation metrics. We refer the reader to (Pham et al., 2018) for details about the hyper-parameter settings.
C.2 DARTS
It vectorizes the aforementioned strings as discussed in Section B and shown in Fig. 5(c). The sampling process is then parameterized by the vector α, which is optimized via gradient-descent in a dual optimization scheme: The architecture is first trained while fixing α, and α is then updated while the network is fixed. This process is repeated in an alternating manner. In the evaluation phase, DARTS samples the top-performing architecture by using the trained α vector as probability prior, i.e., the final model is not a soft average of all paths but one path in the DAG, which makes its evaluation identical to that of the other NAS algorithms. Note that we use the same hyper-parameters as in the released code of Liu et al. (2019b).
C.3 NAO
It implements a gradient-descent algorithm, but instead of vectorizing the strings as in DARTS, it makes use of a variational auto-encoder (VAE) to learn a latent representation of the candidate architectures. Furthermore, it uses a performance predictor, which takes a latent vector as input to predict the corresponding architecture performance. In short, the search phase of NAO consists of first randomly sampling an initial pool of architectures and training them so as to obtain a ranking. This ranking is then used to train the encoder-predictor-decoder network, from which new candidates are sampled, and the process is repeated in an iterative manner. The best architecture is then taken as the top-1 in the NAO ranking. We directly use the code released by Luo et al. (2018).
C.4 BAYESNAS
Bayesian optimization was first introduced to the neural architecture search field by Kandasamy et al. (2018) and Jin et al. (2019). We chose to evaluate BayesNAS (Zhou et al., 2019) because it is more recent than Auto-Keras (Jin et al., 2019) and the work of Kandasamy et al. (2018), and because these two works use different search spaces than DARTS, resulting in models with significantly worse performance than DARTS. BayesNAS adopts Bayesian optimization to prune the fully-connected DAG using the shared weights to obtain accuracy metrics. The search space follows that of DARTS (Liu et al., 2019b), with minor modifications in the connections but exactly the same operations. Please see (Zhou et al., 2019) for more details. Note that BayesNAS was only implemented in the CNN space. We use the search and model code released by Zhou et al. (2019) with our training pipeline, since the authors did not release the training code.
C.5 EXPERIMENTAL SETUP
Following common practice in NAS, we make use of the word-level language modeling Penn Tree Bank (PTB) dataset (Marcus et al., 1994b) and of the image classification CIFAR-10 dataset (Krizhevsky et al., 2009). For these datasets, the goals are, respectively, finding a recurrent cell that correctly predicts the next word given the input sequence, and finding a convolutional cell that maximizes the classification accuracy. The quality of a candidate is then evaluated using the perplexity metric and top-1 accuracy, respectively.
In the evaluation phase, we always use the same model backbone and parameter initialization for all searched architectures, which ensures fairness and reflects the empirical observation that the searched models are insensitive (accuracy variations of less than 0.002 (Liu et al., 2019b)) to initialization during evaluation. For our RNN comparisons, we follow the procedure used in (Liu et al., 2019b; Pham et al., 2018; Luo et al., 2018) for the final evaluation, consisting of keeping the connections found for the best architecture in the search phase but increasing the hidden state size (to 850 in practice), so as to increase capacity. Furthermore, when training an RNN architecture from scratch, we follow (Yang et al., 2017; Merity et al., 2017) and first make use of standard SGD to speed up training, and then change to average SGD to improve convergence. For all CNN architectures, we use RMSProp for fast optimization (Ying et al., 2019) and enable an auxiliary head and cut-out (DeVries & Taylor, 2017) to boost the performance, as in Liu et al. (2019b).
C.6 ADAPTATION TO REDUCED SEARCH SPACE
When changing to reduced search spaces, we adapted the evaluated search algorithms to achieve the best performance. Below, we describe these modifications.
RNN reduced space
• For DARTS, no changes are needed except modifying the number of nodes in the search space.
• For NAO, to mimic the behavior of the algorithm in the space of 12 nodes, we randomly sample 20% of the possible architectures to define the initial candidate pool. We train the encoder-predictor-decoder network for 250 iterations every 50 epochs using the top-4 architectures in the NAO ranking. At each search iteration, we sample at most 3 new architectures to be added to the pool. The rest of the search logic remains unchanged.
• For ENAS, we reduce the number of architectures sampled in one epoch to 20 and increase the number of batches to 10 for each architecture. All other hyper-parameters are unchanged.
CNN reduced space
• For DARTS, again, no changes are needed except modifying the number of nodes in the search space.
• For NAO, since the topology of the NASBench space is very similar to the original search space, we kept most of the parameters unchanged, and only changed the embedding size of the encoder proportionally to the number of nodes (12 × nodes − 12).
• For ENAS, we set the LSTM sampler size to 64 and keep the temperature at 5.0. The number of aggregation steps of each sampler training is set to 10.
D SUPPLEMENTARY EXPERIMENTS
We provide additional experiments to support our claims.
D.1 INFLUENCE OF THE AMOUNT OF SHARING
Depending on the active connections in the DAG, different architectures are subject to different amounts of weight sharing. In Figure 6 (a), let us consider the 3-node case, with node 1 and node 2 fixed and node 3 having node 1 as incoming node. In this scenario, the input to node 3 can be either directly node 0 (i.e., the input), or node 1, or node 2. In the first case, the only network parameters that the output of node 3 depends on are the weights of its own operation. In the second and third cases, however, the output further depends on the parameters of node 1, and of nodes 1 and 2, respectively.
To study the influence of the amount of sharing on the architecture ranking, we performed an experiment where we fixed the first two nodes and only searched for the third one. This represents a space of 12 architectures (3 possible connections to node 3 × 4 operations). We train them using the same setting as in Section 4.3. The ranking of the 12 architectures is shown in Figure 6 (b), where color indicates the number of shared weight matrices, that is, matrices of nodes 1 and 2 also used in the search for node 3. Note that the top-performing architectures do not share any weights and that the more weights are shared, the worse the architecture performs.
In the CNN space, we conduct a similar experiment on NASBench. With the total number of nodes equal to 6, we only permute the operation of the last node and its connection to one of the previous nodes. In short, we have 4 possible connections and 3 operation choices, for a total of 12 architectures. We compute the Kendall Tau among the architectures with the same connection but different operations, and the results are reported in Table 5. Clearly, the correlation between architectures decreases as the number of shared weight matrices increases.
D.2 RANDOM SAMPLING COMPARISON
As discussed before, the random policy in (Liu et al., 2019b) samples 8 architectures, and picks the best after training them for 300 epochs independently. It might seem contradictory that DARTS outperforms this random policy, but cannot surpass the much simpler one designed in our paper, which only randomly samples 10 architectures (1 per random seed), trains them to convergence and picks the best. However, the random policy in DARTS relies on the assumption that a model that performs well in the early training stage will remain effective until the end of training. While this may sound intuitive, we observed a different picture with our reduced search space.
Since we obtained the ground-truth performance ranking, as discussed in Section 4.2 of the main paper, in Figure 7, we plot the evolution of models’ rank while training proceeds, based on the average validation perplexity over 10 runs. Clearly, there are significant variations during training: Good models in early stages drop lower in the ranking towards the end. As such, there is a non-negligible chance that the random policy in DARTS
picks a model whose performance will be sub-optimal. We therefore believe that our policy that simply samples one model and trains it until convergence yields a more fair baseline. Furthermore, the fact that we perform our comparison using 10 random seeds, for both our approach and the NAS algorithms, vs a single one in (Liu et al., 2019b) makes our conclusions more reliable.
D.3 NASBENCH DETAILED RESULTS
We provide additional evaluations on the NASBench dataset to benchmark the performance of the state-of-the-art NAS algorithms. In addition to the three methods in the main paper, we reimplemented some recent algorithms, such as FBNet (Wu et al., 2018), Single Path One Shot (SPOS) (Guo et al., 2019), and FairNAS (Chu et al., 2019). Note that we removed the FBNet device look-up table and model latency from the objective function since the search for a mobile model is not our primary goal. This also makes it comparable with the other baselines.
To ensure fairness, after the search phase is completed, each method trains the top-1 architecture found by its policy from scratch to obtain ground-truth performance; we repeated all the experiments with 10 random seeds. We report the mean and best top-1 accuracy in Table 6 for a number of nodes n ∈ [4, 7], and the Kendall Tau (K-T) values for one-shot methods following Section 4.2 in the paper. From the results, we observe that: 1) Sampling-based NAS strategies always have better mean accuracy with lower standard deviation, meaning that they converge to a local minimum more easily but do not explore the entire search space. 2) By contrast, one-shot methods explore more diverse solutions, thus having larger standard deviations and lower means, yet they are able to pick a better architecture than sampling-based strategies (94.47 for FairNAS and 94.24 for SPOS, vs. 93.98 for the best sampling-based method, FBNet). 3) ENAS consistently improves as the number of nodes increases. 4) FBNet consistently outperforms DARTS; given the similarity of the two methods, using Gumbel Softmax appears to be the better choice. 5) The variance of these algorithms is large and sensitive to initialization. 6) Even one-shot algorithms cannot find the overall best architecture, which has an accuracy of 95.06. | 1. What are the strengths and weaknesses of the paper regarding its contribution to evaluating search strategies for neural architecture search?
2. How does the reviewer assess the significance of the issues pointed out by the paper regarding the current evaluation scheme?
3. Do you think the analysis and experiments conducted in the paper adequately support its conclusions? If not, what further experiments or discussions do you suggest?
4. How does the reviewer evaluate the impact of the paper's findings on future research in neural architecture search?
5. Are there any concerns about the comparisons made between different methods in the paper, particularly with recent works in NAS that evaluate under multiple random seeds and perform fair comparisons with random search baselines? | Review | Review
This work studies the evaluation of search strategies for neural architecture search. It points out existing problems with the current evaluation scheme: (1) it only compares the final result without testing the robustness under different random seeds; (2) it lacks a fair comparison with a random baseline under different random seeds. The authors analyzed three popular NAS methods with weight sharing (ENAS, DARTS, NAO), and showed that they don't significantly improve upon a random baseline on PTB and CIFAR-10. On a reduced search space of RNN and CNN (NASBench), they showed that the three methods fail to find the best-performing architecture. Then they compared search with and without weight sharing and showed the correlation between architecture performance under the two conditions in a reduced search space, which indicates that weight sharing is a potential cause for the suboptimal performance.
I recommend acceptance of the paper for the reasons below.
(1) It pointed out some important issues in the evaluation of NAS methods: evaluating under different random seeds and a fair comparison with a random baseline.
(2) The analysis is supported by experiments in the original search space and a reduced search space, which makes the result more convincing.
(3) It proposed the weight sharing as a potential cause and supported the hypothesis with experiments in the reduced search space, although more experiments in a realistic search space are needed to make the conclusion more solid.
Weaknesses:
(1) The problem that the search space is over-optimized and constrained has been noticed before. For example, Table 1 in (Liu et al., 2019) showed that the random search baseline performs not much worse than DARTS (~0.53% difference), which is similar to the conclusions on CIFAR-10 presented in this work.
(2) More recent work in NAS is already evaluating under multiple random seeds and performing fair comparisons with random search baselines, for example, (So et al., 2019). There should be more discussion about such improvements in the rigorous evaluation of NAS.
(3) The comparison between with and without weight sharing in section 4.3 is interesting, but there should be more support in a realistic search space, because the landscape could be very different. Otherwise, it is better to make clear the scope of the conclusion, for example, instead of "in CNN space, the ranking disorder ...", it is better to use "in a reduced CNN space, ...".
"Darts: Differentiable architecture search." Liu, Hanxiao, Karen Simonyan, and Yiming Yang. ICLR, 2019
"The Evolved Transformer." David R. So, Chen Liang, and Quoc V. Le., International Conference on Machine Learning. 2019.
Typos:
"based one their results on the downstream task." -> "based on"
"obtained an an accuracy" -> "obtained an accuracy"
====================================
I have read the author response and would keep the same rating. The paper pointed out an important issue, but it has also been noticed before. The insight on weight sharing is interesting, although more experiments are needed to verify the claim on state-of-the-art NAS search spaces.
ICLR | Title
Evaluating The Search Phase of Neural Architecture Search
Abstract
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.
1 INTRODUCTION
By automating the design of a neural network for the task at hand, Neural Architecture Search (NAS) has tremendous potential to impact the practicality of deep learning (Zoph & Le, 2017; Liu et al., 2018b;a; Tan et al., 2018; Baker et al., 2016), and has already obtained state-of-the-art performance on many tasks. A typical NAS technique (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018a) has two stages: the search phase, which aims to find a good architecture, and the evaluation one, where the best architecture is trained from scratch and validated on the test data.
In the literature, NAS algorithms are typically compared based on their results in the evaluation phase. While this may seem intuitive, the search phases of these algorithms often differ in several ways, such as their architecture sampling strategy and the search space they use, and the impact of these individual factors cannot be identified by looking at the downstream task results only. Furthermore, the downstream task results are often reported for a single random seed, which leaves unanswered the question of the robustness of the search strategies.
In this paper, we therefore propose to investigate the search phase of existing NAS algorithms in a controlled manner. To this end, we compare the quality of the NAS solutions with a random search policy, which uniformly randomly samples an architecture from the same search space as the NAS algorithms, and then trains it using the same hyper-parameters as the NAS solutions. To reduce randomness, the search using each policy, i.e., random and NAS ones, is repeated several times, with different random seeds.
We perform a series of experiments on the Penn Tree Bank (PTB) (Marcus et al., 1994a) and CIFAR-10 (Krizhevsky et al., 2009) datasets, in which we compared the state-of-the-art NAS algorithms whose code is publicly available—DARTS (Liu et al., 2019b), NAO (Luo et al., 2018) and ENAS (Pham et al., 2018)—to our random policy. We reached the surprising conclusion that, as shown in Table 1, none of them significantly outperforms random sampling. Since the mean performance for randomly-sampled architectures converges to the mean performance over the entire search space, we further conducted Welch's t-tests (Welch, 1947), which reveal that, in the RNN space, ENAS and DARTS cannot be differentiated from the mean of the entire search space, while NAO yields worse performance than random sampling. While the situation is slightly better in the CNN space, all three algorithms still perform similarly to random sampling. Note that this does not necessarily mean that these algorithms perform poorly, but rather that the search space has been sufficiently constrained so that even a random architecture in this space provides good results. To verify this, we experiment with search spaces where we can exhaustively evaluate all architectures, and observe that these algorithms truly cannot discover top-performing architectures.
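For reference, this test is readily available in standard libraries. Below is a minimal sketch of the comparison, assuming two arrays of final validation perplexities, one per policy, collected over 10 seeds; the numbers shown are hypothetical placeholders, not results from our experiments.

```python
# Welch's t-test between a NAS policy and the random policy. The perplexity
# values below are hypothetical placeholders, not results from the paper.
from scipy import stats

nas_ppl = [58.1, 59.3, 57.9, 60.2, 58.8, 59.0, 61.5, 58.4, 59.7, 58.6]
random_ppl = [58.5, 59.1, 58.0, 59.8, 58.9, 59.4, 60.1, 58.7, 59.2, 58.8]

# equal_var=False selects Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(nas_ppl, random_ppl, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # large p: the means cannot be distinguished
```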
In addition to this, we observed that the ranking by quality of candidate architectures produced by the NAS algorithms during the search does not reflect the true performance of these architectures in the evaluation phase. Investigating this further allowed us to identify that weight sharing (Pham et al., 2018), widely adopted to reduce the amount of required resources from thousands of GPU days to a single one, harms the individual networks’ performance. More precisely, using reduced search spaces, we make use of the Kendall Tau τ metric1 to show that the architecture rankings obtained with and without weight sharing are entirely uncorrelated in RNN space (τ = -0.004 over 10 runs); and have little correlation in the CNN space (τ = 0.195 over 10 runs). Since such a ranking is usually treated as training data for the NAS sampler in the search phase, this further explains the small margin between random search and the NAS algorithms. We also show that training samplers without weight sharing in CNN space surpasses random sampling by a significant margin.
In other words, we disprove the common belief that the quality of architectures trained with and without weight sharing is similar. We show that the difference in ranking negatively impacts the search phase of NAS algorithms, thus seriously impeding their robustness and performance.
In short, evaluating the search phase of NAS, which is typically ignored, allowed us to identify two key characteristics of state-of-the-art NAS algorithms: The importance of the search space and the negative impact of weight sharing. We believe that our evaluation framework will be instrumental in designing NAS search strategies that are superior to the random one. Our code is publicly available at https://github.com/kcyu2014/eval-nas.
2 RELATED WORK
Since its introduction in (Zoph & Le, 2017), NAS has demonstrated great potential to surpass the human design of deep networks for both visual recognition (Liu et al., 2018b; Ahmed & Torresani, 2018; Chen et al., 2018; Pérez-Rúa et al., 2018; Liu et al., 2019a) and natural language processing (Zoph & Le, 2017; Pham et al., 2018; Luo et al., 2018; Zoph et al., 2018; Liu et al., 2018b; Cai et al., 2018a). Existing search strategies include reinforcement learning (RL) samplers (Zoph & Le, 2017; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithms (Xie & Yuille, 2017; Real et al., 2017; Miikkulainen et al., 2019; Liu et al., 2018b; Lu et al., 2018), gradient-descent (Liu et al., 2019b), bayesian optimization (Kandasamy et al., 2018; Jin et al., 2019; Zhou et al., 2019) and performance predictors (Liu et al., 2018a; Luo et al., 2018). Here, our goal is not to introduce a new search policy, but rather to provide the means to analyze existing ones. Below, we briefly discuss existing NAS methods and focus on how they are typically evaluated.
Neural architecture search with weight sharing. The potential of vanilla NAS comes with the drawback of requiring thousands of GPU hours even for small datasets, such as PTB and CIFAR-10.
1The Kendall Tau (Kendall, 1938) metric measures the correlation of two rankings. Details in Appendix A.1.
Furthermore, even when using such heavy computational resources, vanilla NAS has to restrict the number of trained architectures from a total of 10^9 to 10^4, and increasing the sampler accuracy can only be achieved by increasing the resources.
ENAS (Pham et al., 2018) was the first to propose a training scheme with shared parameters, reducing the resources from thousands of GPU days to one. Instead of being trained from scratch, each sampled model inherits the parameters from previously-trained ones. Since then, NAS research has mainly focused on two directions: 1) Replacing the RL sampler with a better search algorithm, such as gradient descent (Liu et al., 2019b), bayesian optimization (Zhou et al., 2019) and performance predictors (Luo et al., 2018); 2) Exploiting NAS for other applications, e.g., object detection (Ghiasi et al., 2019; Chen et al., 2019), semantic segmentation (Liu et al., 2019a), and finding compact networks (Cai et al., 2018b; Wu et al., 2018; Chu et al., 2019; Guo et al., 2019).
Characterizing the search space. Ying et al. (2019); Dong & Yang (2020) introduced datasets that contain the ground-truth performance of CNN cells, and Wang et al. (2019) evaluated some traditional search algorithms on them. Similarly, Radosavovic et al. (2019) characterizes many CNN search spaces by computing the statistics of a set of sampled architectures, revealing that, for datasets such as CIFAR-10 or ImageNet, these statistics are similar. While these works support our claim that the evaluation of NAS algorithms is crucial, they do not directly evaluate the state-of-the-art NAS algorithms as we do here.
Evaluation of NAS algorithms. Typically, the quality of NAS algorithms is judged based on the results of the final architecture they produce on the downstream task. In other words, the search and robustness of these algorithms are generally not studied, with (Liu et al., 2019b; So et al., 2019) being the only exceptions for robustness, where results obtained with different random seeds were reported. Here, we aim to further the understanding of the mechanisms behind the search phase of NAS algorithms. Specifically, we propose doing so by comparing them with a simple random search policy, which uniformly randomly samples one architecture per run in the same search space as the NAS techniques.
While some works have provided partial comparisons to random search, these comparisons unfortunately did not give a fair chance to the random policy. Specifically, (Pham et al., 2018) reports the results of only a single random architecture, and (Liu et al., 2018b) those of an architecture selected among 8 randomly sampled ones as the most promising one after training for 300 epochs only. Here, we show that a fair comparison to the random policy, obtained by training all architectures, i.e., random and NAS ones, for 1000 epochs and averaging over multiple random seeds for robustness, yields a different picture; the state-of-the-art search policies are no better than the random one.
The motivation behind this comparison was our observation of only a weak correlation between the performance of the searched architectures and the ones trained from scratch during the evaluation phase. This phenomenon was already noticed by Zela et al. (2018), and concurrently to our work by Li & Talwalkar (2019); Xie et al. (2019); Ying et al. (2019), but the analysis of its impact or its causes went no further. Here, by contrast, we link this difference in performance between the search and evaluation phases to the use of weight sharing.
While this may seem to contradict the findings of Bender et al. (2018), which, on CIFAR-10, observed a strong correlation between architectures trained with and without weight sharing when searching a CNN cell, our work differs from (Bender et al., 2018) in two fundamental ways: 1) The training scheme in (Bender et al., 2018), in which the entire model with shared parameters is trained via random path dropping, is fundamentally different from those used by state-of-the-art weight-sharing NAS strategies (Pham et al., 2018; Liu et al., 2019b; Luo et al., 2018); 2) While the correlation in (Bender et al., 2018) was approximated using a small subset of sampled architectures, we make use of a reduced search space where we can perform a complete evaluation of all architectures, thus providing an exact correlation measure in this space.
3 EVALUATING THE NAS SEARCH
In this section, we detail our evaluation framework for the NAS search phase. As depicted in Fig. 1(a,b), typical NAS algorithms consist of two phases:
• Search: The goal of this phase is to find the best candidate architecture from the search space2. This is where existing algorithms, such as ENAS, DARTS and NAO, differ. Nevertheless, for all the algorithms, the search depends heavily on initialization. In all the studied policies, initialization is random and the outcome thus depends on the chosen random seed.
• Evaluation: In this phase, all the studied algorithms retrain the best model found in the search phase. The retrained model is then evaluated on the test data.
The standard evaluation of NAS techniques focuses solely on the final results on the test data. Here, by contrast, we aim to evaluate the search phase itself, which truly differentiates existing algorithms.
To do this, as illustrated in Fig. 1(c), we establish a baseline; we compare the search phase of existing algorithms with a random search policy. An effective search algorithm should yield a solution that clearly outperforms the random policy. Below, we introduce our framework to compare NAS search algorithms with random search. The three NAS algorithms that we evaluated, DARTS (Liu et al., 2019b), NAO (Luo et al., 2018) and ENAS (Pham et al., 2018), are representative of the state of the art for different search algorithms: reinforcement learning, gradient-descent and performance prediction, and are discussed in Appendix C.
3.1 COMPARING TO RANDOM SEARCH
We implement our random search policy by simply assigning uniform probabilities to all operations. Then, for each node in the Directed Acyclic Graph (DAG) that is typically used to represent an architecture, we randomly sample a connection to one previous node from the resulting distributions.
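As a concrete illustration, this policy amounts to only a few lines of code. The sketch below assumes the RNN operation set and a 12-node space; both are placeholders for the actual search-space definition.

```python
# Minimal sketch of the random search policy: for every intermediate node,
# uniformly sample one incoming node and one operation. OPS and NUM_NODES are
# illustrative placeholders for the real search-space definition.
import random

OPS = ["identity", "sigmoid", "tanh", "relu"]
NUM_NODES = 12

def sample_random_architecture(seed=None):
    rng = random.Random(seed)
    arch = []
    for node in range(1, NUM_NODES + 1):
        prev = rng.randrange(node)  # uniform over all previous nodes, including the input
        op = rng.choice(OPS)        # uniform over the operation set
        arch.append((prev, op))
    return arch

print(sample_random_architecture(seed=0))
```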
An effective search policy should outperform the random one. To evaluate this, we compute the validation results of the best architecture found by the NAS algorithm trained from scratch, as well as those of a single randomly sampled architecture. Comparing these values for a single random seed would of course not provide a reliable measure. Therefore, we repeat this process for multiple random seeds used both during the search phase of the NAS algorithm and to sample one random architecture as described above. We then report the means and standard deviations of these results over the different seeds. Note that while we use different seeds for the search and random sampling, we always use the same seed when training the models from scratch during the evaluation phase.
Our use of multiple random seeds and of the same number of epochs for the NAS algorithms and for our random search policy makes the comparison fair. This contrasts with the comparisons performed in (Pham et al., 2018), where the results of only a single random architecture were reported, and in (Liu et al., 2019b), which selected a single best random architecture among an initial set of 8 after training for 300 epochs only. As shown in Appendix D.2, some models that perform well in the early training stages may yield worse performance than others after convergence. Therefore, choosing the best random architecture after only 300 epochs for PTB and 100 for CIFAR-10, and doing so for a single random seed, might not be representative of the general behavior.
2Details about search spaces are provided in Appendix B.
3.2 SEARCH IN A REDUCED SPACE
Because of the size of standard search spaces, one cannot understand the quality of the search by fully evaluating all possible solutions. Hence, we propose to make use of reduced search spaces with ground-truth architecture performances available to evaluate the search quality. For RNNs, we simply reduce the number of nodes in the search space from 12 to 2. Given that each node is identified by two values, the ID of the incoming node and the activation function, the space has a cardinality |S| = n! · |O|^n, where n = 2 nodes and |O| = 4 operations, thus yielding 32 possible solutions. To obtain ground truth, we train all of these architectures individually. Each architecture is trained 10 times with a different seed, which therefore yields a mean and standard deviation of its performance. The mean value is used as ground truth—the actual potential of the given architecture. These experiments took around 5000 GPU hours.
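The cardinality above can be verified by direct enumeration; a minimal sketch for the 2-node case:

```python
# Enumerate the reduced 2-node RNN space: node 1 can only take node 0 as input
# (1 choice), node 2 can take node 0 or node 1 (2 choices), and each node picks
# one of 4 operations, giving n! * |O|^n = 2 * 4^2 = 32 architectures.
from itertools import product

OPS = ["identity", "sigmoid", "tanh", "relu"]
space = [
    ((in1, op1), (in2, op2))
    for in1 in range(1)                # inputs available to node 1
    for in2 in range(2)                # inputs available to node 2
    for op1, op2 in product(OPS, OPS)  # one operation per node
]
print(len(space))  # 32
```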
For CNNs, we make use of NASBench-101 (Ying et al., 2019), a CNN graph-based search space with 3 possible operations, conv3x3, conv1x1 and max3x3. This framework defines search spaces with between 3 and 7 nodes, with 423,624 architectures in the 7-node case. To the best of our knowledge, we are the first to evaluate the NAS methods used in this paper on NASBench.
4 EXPERIMENTAL RESULTS
To analyze the search phase of the three state-of-the-art NAS algorithms mentioned above, we first compare these algorithms to our random policy when using standard search spaces for RNNs on (PTB) and CNNs on CIFAR-10. Details about the experiment setting are in Appendix C.5. The surprising findings in this typical NAS use case prompted us to study the behavior of the search strategies in reduced search spaces. This allowed us to identify a factor that has a significant impact on the observed results: Weight sharing. We then quantify this impact on the ranking of the NAS candidates, evidencing that it dramatically affects the effectiveness of the search.
4.1 NAS COMPARISON IN A STANDARD SEARCH SPACE
Below, we compare DARTS (Liu et al., 2019b), NAO (Luo et al., 2018), ENAS (Pham et al., 2018) and BayesNAS (Zhou et al., 2019) with our random search policy, as discussed in Section 3.1. We follow (Liu et al., 2019b) to define an RNN search space of 12 nodes and a CNN one of 7 nodes. For each of the four search policies, we run 10 experiments with a different initialization of the sampling policy. During the search phase, we used the authors-provided hyper-parameters and code for each policy. Once a best architecture is identified by the search phase, it is used for evaluation, i.e., we train the chosen architecture from scratch for 1000 epochs for RNN and 600 for CNN.
RNN Results. In Figure 2, we plot, on the left, the mean perplexity evolution over the 1000 epochs, obtained by averaging the results of the best architectures found using the 10 consecutive seeds.3 On the right, we show the perplexity evolution for the best cell of each strategy among the 10 different runs. Random sampling is robust and consistently competitive. As shown in Table 1, it outperforms on average the DARTS and NAO policies, and yields the overall best cell for these experiments with a perplexity of 57.60. Further training this cell for 4000 epochs, as in (Liu et al., 2019b), yields a perplexity of 55.93. The excellent performance of the random policy evidences the high expressiveness of the manually-constructed search space; even arbitrary policies in this space perform well, as evidenced by the relatively low standard deviation over the 10 seeds of the random architectures, shown in Table 1 and Figure 2(left).
3Starting from 1268, which is right after 1267, the seed released by Liu et al. (2019b). Note that, using this seed, we can reproduce the DARTS RNN search and obtain a validation PPL of 55.7 as in (Liu et al., 2019b).
CNN Results. In Table 2, we compare the NAS methods with our random policy in the search space of Liu et al. (2019b). We provide the accuracy reported in the original papers as well as the accuracy we reproduced using our implementation. Note that the NAS algorithms only marginally outperform random search, by less than 0.5% in top-1 accuracy. The best architecture was discovered by NAO, with an accuracy of 97.10%, again less than 0.5% higher than the randomly discovered one. Note that, our random sampling comes at no search cost. By contrast, Li & Talwalkar (2019) obtained an accuracy of 97.15% with a different random search policy having the same cost as DARTS.
Observations:
• The evaluated state-of-the-art NAS algorithms do not surpass random search by a significant margin, and even perform worse in the RNN search space.
• The ENAS policy sampler has the lowest variance among the three tested ones. This shows that ENAS is more robust to the variance caused by the random seed of the search phase.
• The NAO policy is more sensitive to the search space; while it yields the best performance in the CNN space, it performs the worst in the RNN one.
• The DARTS policy is very sensitive to random initialization, and yields the largest standard deviation across the 10 runs (2.54 in RNN and 0.23 in CNN space).
Such a comparison of search policies would not have been possible without our framework. Nevertheless, the above analysis does not suffice to identify the reason behind these surprising observations. As mentioned before, one reason could be that the search space has been sufficiently constrained so that all architectures perform similarly well. By contrast, if we assume that the search space does contain significantly better architectures, then we can conclude that these search algorithms truly fail to find a good one. To answer this question, we evaluate these methods in a reduced search space, where we can obtain the true performance of all possible architectures.
4.2 SEARCHING A REDUCED SPACE
The results in the previous section highlight the inability of the studied methods to surpass random search. Encouraged by these surprising results, we then dig deeper into their causes. Below, we make use of search spaces with fewer nodes, which we can explore exhaustively.
Reduced RNN space. We use the same search space as in Section 3.2 but reduce the number of intermediate nodes to 2. In Table 3 (A), we provide the results of searching the RNN 2-node space. Its smaller size allows us to exhaustively compute the results of all possible solutions, thus determining the upper bound for
this case. In Figure 3, we plot the rank of the top 1 architecture discovered by the three NAS algorithms for each of the 10 different runs.
We observe that: (i) All policies failed to find the architecture that actually performs best; (ii) The ENAS policy always converged to the same architecture. This further evidences the robustness of ENAS to the random seed; (iii) NAO performs better than random sampling on average because it keeps a ranking of architectures; (iv) DARTS never discovered a top-5 architecture.
Reduced CNN space. In Table 3 (B), we report the mean and best test top-1 accuracy over 10 different runs on the NASBench-101 7-node space. To assess the search performance, we also show the best architecture rank in the entire space. The best test accuracy found by these methods is 93.33, by NAO, which remains much lower than the ground-truth best of 95.06. In terms of ranking, the best rank of these methods across 10 runs is 19522, which is among the top 4% architectures and yields a probability of 0.62 to surpass a randomly-sampled one given the same search budget. Note that ENAS and DARTS only have 7% and 24% chance to surpass the random policy. See Appendix A.2 for the definition of this probability, and Appendix D.3 for detailed results.
NAO seems to constantly outperform random search in the reduced space. Nevertheless, the final architecture chosen by NAO is always one of the architectures from the initial pool, which were sampled uniformly randomly. This indicates that the ranking of NAO is not correctly updated throughout the search and that, in practice, in a reduced space, NAO is similar to random search.
4.3 IMPACT OF WEIGHT SHARING
Our previous experiments in reduced search spaces highlight that the ranking of the searched architectures does not reflect the ground-truth one. As we will show below, this can be traced back to weight sharing, which all the tested algorithms, and the vast majority of existing ones, rely on. To evidence this, we perform the following experiments:
Without WS: We make use of the reduced space, where we have the architecture’s real performance.
With WS: We train the architectures in parallel, using the weight sharing strategy employed in NAO and ENAS. As DARTS does not have discrete representations of the solutions during the search, the idea of solution ranking does not apply. During training, each mini-batch is given to an architecture uniformly sampled from the search space. We repeat the process 10 times, with 10 random seeds and train the shared weights for 1000 epochs for the RNN experiments and 200 epochs for the CNN ones. Note that, this approach is equivalent to Single Path One Shot (SPOS) (Guo et al., 2019). It guarantees equal expectations of the number of times each architecture is sampled, thus overcoming the bias due to unbalanced training resulting from ineffective sampling policies.
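A minimal PyTorch-style sketch of this uniform single-path training scheme is given below; the supernet, the data loader and the uniform sampler are hypothetical placeholders for the actual implementation.

```python
# Sketch of the uniform single-path weight-sharing training described above:
# each mini-batch updates only the path of one uniformly sampled architecture.
# `supernet`, `loader` and `search_space.sample_uniform()` are placeholders.
import torch
import torch.nn.functional as F

def train_shared_weights(supernet, loader, search_space, epochs, lr=0.05):
    optimizer = torch.optim.SGD(supernet.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for inputs, targets in loader:
            arch = search_space.sample_uniform()  # uniform over all architectures
            logits = supernet(inputs, arch)       # forward pass through the sampled path only
            loss = F.cross_entropy(logits, targets)
            optimizer.zero_grad()
            loss.backward()                       # gradients flow only through the active path
            optimizer.step()
```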
We then compute the correlation between the architecture rankings found with WS and the ground truth (i.e., the architectures trained independently). For each of the 10 runs of the weight sharing strategy, we evaluate the Kendall Tau metric (defined in Appendix A.1) of the final rankings with respect to the real averaged ranking.
RNN Results. In Figure 4(a), we depict the architecture performance obtained without WS (sorted in ascending order of average validation perplexity), and the corresponding performance with WS. In Figure 4(b), we show the rank difference, where the best and worst were found using the Kendall Tau metric, and show a concrete rank change example in Figure 4(c).
CNN Results. We report the average Kendall Tau across 10 different runs. Note that we sampled up to 200 architectures for each experiment and fully evaluated them on the entire test set, using the test accuracy for ranking. The Kendall Tau values for search spaces from 4 to 7 nodes are, respectively, 0.441, 0.314, 0.214 and 0.195. We also provide other statistics in Table 6 of Appendix D.3.
Since NAO and ENAS intrinsically disentangle the training of the shared weights and of the sampler, to further confirm the negative effect of weight sharing, we adapt these algorithms to use the architectures' performance in the NASBench dataset to train their sampler. Table 4 evidences that, after removing weight sharing, both ENAS and NAO consistently discover a good architecture, as indicated by a small difference between the best over 10 runs and the mean performance. More interestingly, for the 7-node case, the best cells discovered (94.11% by NAO and 94.04% by ENAS) are more than 1% better than the best cells found with weight sharing (93.33 and 92.54, respectively, in Table 3).
Observations:
• The difference in architecture performance is not related to the use of different random seeds, as indicated by the error bars in Figure 4(a).
• WS never produces the true ranking, as evidenced by the Best case in Figure 4(b).
• The behavior of the WS rankings is greatly affected by changing the seed. In particular, the Kendall Tau values for the plots in Figure 4(b) are 0.282, −0.004 and −0.116 for Best, Average and Worst, respectively.
• For RNNs, the Kendall Tau values are close to 0, which suggests a lack of correlation between the WS rankings and the true one. By contrast, for CNNs, the correlation is on average higher than for RNNs. This matches the observation in Section 4.1 that CNN results are generally better than RNN ones.
• In a reduced CNN space, the ranking disorder increases with the space complexity, i.e., this disorder is proportional to the amount of weight sharing.4
• If we train NAO and ENAS without weight sharing on NASBench, the performance is on average 1% higher than with weight sharing. This further evidences that weight sharing negatively impacts the sampler, and that with a good ranking, the sampler can be trained better. Furthermore, the probability to surpass random search increases from 0.62 to 0.92 for NAO and from 0.07 to 0.90 for ENAS.
4We also conduct another experiment regarding the amount of sharing in Appendix D.1.
Together with the previous results, we believe that these experiments evidence the negative impact of weight sharing: it dramatically affects the performance of the sampled architectures, thus complicating the overall search process and leading to search policies that are no better than the random one.
5 CONCLUSION
In this paper, we have analyzed the effectiveness of the search phase of NAS algorithms via fair comparisons to random search. We have observed that, surprisingly, the search policies of state-of-the-art NAS techniques are no better than random, and have traced the reason for this to the use of (i) a constrained search space and (ii) weight sharing, which shuffles the architecture ranking during the search, thus negatively impacting it.
In essence, our gained insights highlight two key properties of state-of-the-art NAS strategies, which had been overlooked in the past due to the single-minded focus of NAS evaluation on the results on the target tasks. We believe that this will be key to the development of novel NAS algorithms. In the future, we will aim to do so by designing relaxed weight sharing strategies.
6 ACKNOWLEDGEMENT
This work was supported in part by the Swiss National Science Foundation. We would also like to thank Rene Ranftl and Vladlen Koltun for the discussions and support.
A METRICS TO EVALUATE NAS ALGORITHMS
A.1 KENDALL TAU METRIC
As a correlation measure, we make use of the Kendall Tau (τ ) metric (Kendall, 1938): a number in the range [-1, 1] with the following properties:
• τ = −1: Maximum disagreement. One ranking is the opposite of the other.
• τ = 1: Maximum agreement. The two rankings are identical.
• τ close to 0: A value close to zero indicates the absence of correlation.
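In practice, this metric is available off the shelf; a minimal sketch using SciPy on two hypothetical rankings:

```python
# Kendall Tau between a weight-sharing ranking and the ground-truth ranking.
# The two rankings below are hypothetical examples.
from scipy.stats import kendalltau

ground_truth_rank = [1, 2, 3, 4, 5, 6, 7, 8]
weight_sharing_rank = [3, 1, 2, 6, 4, 8, 5, 7]

tau, p_value = kendalltau(ground_truth_rank, weight_sharing_rank)
print(f"Kendall Tau = {tau:.3f}")  # 1: identical, -1: reversed, ~0: uncorrelated
```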
A.2 PROBABILITY TO SURPASS RANDOM SEARCH
As discussed in Section 3.2, the goal of NASBench is to search for a CNN cell with up to 7 nodes and 3 operations, resulting in a total of 423,624 architectures. Each architecture is trained 3 times with different random initializations for up to 108 epochs on the CIFAR-10 training set, and evaluated on the test split. Hence, the average test accuracy of these runs can be seen as the ground-truth performance. In our experiments, we use this to rank the architectures, from 1 (highest accuracy) to 423,624. Given the best architecture's rank r after n runs, and the maximum rank r_max equal to the total number of architectures, the probability that the best architecture discovered is better than a randomly searched one given the same budget is given by
p = 1 − (1 − r/r_max)^n. (1)
We use this as a new metric to evaluate the search phase.
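Eq. (1) is straightforward to compute; a minimal sketch, with purely illustrative input values:

```python
# Probability (Eq. 1) that the best architecture found after n search runs,
# with ground-truth rank r out of r_max, beats random search given the same
# budget. The example values below are illustrative only.
def prob_surpass_random(r: int, r_max: int, n: int) -> float:
    return 1.0 - (1.0 - r / r_max) ** n

print(prob_surpass_random(r=1000, r_max=423624, n=10))
```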
B NAS SEARCH SPACE REPRESENTATION
As discussed in the main paper, our starting point is a search space for a neural architecture, as illustrated in Figure 5. A convolutional cell can be represented with a similar topological structure. Following common practice in NAS (Zoph & Le, 2017), a candidate architecture sampled from this space connects the input and the output nodes through a sequence of intermediary ones. Each node is connected to others and has an operation attached to it.
A way of representing this search space (Pham et al., 2018; Luo et al., 2018), depicted in Figure 5(b), is by using strings. Each character in the string indicates either the node ID that the current node is connected to, or the operation selected for the current node. Operations include the identity, sigmoid, tanh and ReLU (Nair & Hinton, 2010).
Following the alternative way introduced in (Liu et al., 2019b), we make use of a vectorized representation of these strings. More specifically, as illustrated by Figure 5(c), a node ID, resp. an operation, is encoded as a vector of probabilities over all node IDs, resp. all operations. For instance, the connection between nodes i and j is represented as y^(i,j)(x) = Σ_{o∈O} p_o · o(x), with O the set of all operations, and p_o = softmax(α_o) = exp(α_o) / Σ_{o′∈O} exp(α_{o′}) the probability of each operation.
C NAS ALGORITHMS
Here, we discuss the three state-of-the-art NAS algorithms used in our experiments in detail, including their hyper-parameters during the search phase. The current state of the art for NAS on CIFAR-10 is ProxylessNAS (Cai et al., 2018b), with a top-1 accuracy of 97.92. However, this algorithm inherits the sampler from ENAS and DARTS, but with a different objective function, backbone model, and search space. In addition, the code is not publicly available, which precludes us from directly evaluating it.
C.1 ENAS
It adopts a reinforcement learning sampling strategy that is updated with the REINFORCE algorithm. The sampler is implemented as a two-layer LSTM (Hochreiter & Schmidhuber, 1997) and generates a sequence of strings. In the training process, each candidate sampled by the ENAS controller is trained on an individual mini-batch. At the end of each epoch, the controller samples new architectures
that are evaluated on a single batch of the validation dataset. After this, the controller is updated accordingly using these validation metrics. We refer the reader to (Pham et al., 2018) for details about the hyper-parameter settings.
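A heavily simplified sketch of one controller update is given below; the log-probability of the sampled architecture and the reward (e.g., the validation accuracy on one batch) are assumed to be computed elsewhere, and the moving-average baseline is one common variance-reduction choice.

```python
# Heavily simplified sketch of a REINFORCE update for an RL controller.
# `log_prob` is the sum of log-probabilities of the sampled architecture's
# decisions (a tensor with gradients); `reward` and `baseline` are floats.
def reinforce_step(optimizer, log_prob, reward, baseline, decay=0.95):
    baseline = decay * baseline + (1 - decay) * reward  # moving-average baseline
    loss = -log_prob * (reward - baseline)              # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return baseline
```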
C.2 DARTS
It vectorizes the aforementioned strings as discussed in Section B and shown in Fig. 5(c). The sampling process is then parameterized by the vector α, which is optimized via gradient-descent in a dual optimization scheme: The architecture is first trained while fixing α, and α is then updated while the network is fixed. This process is repeated in an alternating manner. In the evaluation phase, DARTS samples the top-performing architecture by using the trained α vector as probability prior, i.e., the final model is not a soft average of all paths but one path in the DAG, which makes its evaluation identical to that of the other NAS algorithms. Note that we use the same hyper-parameters as in the released code of Liu et al. (2019b).
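A sketch of the resulting alternating optimization is shown below; it corresponds to the first-order approximation of the bi-level objective, and the model and optimizer interfaces are placeholders.

```python
# Sketch of the alternating scheme described above: architecture parameters
# alpha are updated on validation data, network weights w on training data.
# This corresponds to the first-order DARTS approximation; `model`, the two
# optimizers and `criterion` are placeholders for the actual implementation.
def search_epoch(model, train_loader, valid_loader, w_opt, alpha_opt, criterion):
    for (x_tr, y_tr), (x_va, y_va) in zip(train_loader, valid_loader):
        alpha_opt.zero_grad()
        criterion(model(x_va), y_va).backward()  # gradient w.r.t. alpha
        alpha_opt.step()                         # updates alpha only
        w_opt.zero_grad()
        criterion(model(x_tr), y_tr).backward()  # gradient w.r.t. w, alpha fixed
        w_opt.step()                             # updates w only
```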
C.3 NAO
It implements a gradient-descent algorithm, but instead of vectorizing the strings as in DARTS, it makes use of a variational auto-encoder (VAE) to learn a latent representation of the candidate architectures. Furthermore, it uses a performance predictor, which takes a latent vector as input to predict the corresponding architecture performance. In short, the search phase of NAO consists of first randomly sampling an initial pool of architectures and training them so as to obtain a ranking. This ranking is then used to train the encoder-predictor-decoder network, from which new candidates are sampled, and the process is repeated in an iterative manner. The best architecture is then taken as the top-1 in the NAO ranking. We directly use the code released by Luo et al. (2018).
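The overall loop can be summarized by the sketch below; every helper in it (train_and_rank, the encoder-predictor-decoder epd and its methods) is a hypothetical placeholder for the actual NAO implementation.

```python
# High-level sketch of the NAO search loop described above. All helpers are
# hypothetical placeholders: `train_and_rank` trains candidates and returns
# them ranked by accuracy; `epd` is the encoder-predictor-decoder network.
def nao_search(search_space, epd, num_iterations, pool_size, samples_per_iter):
    pool = [search_space.sample_uniform() for _ in range(pool_size)]  # initial random pool
    for _ in range(num_iterations):
        ranked = train_and_rank(pool)             # obtain a ranking of the candidate pool
        epd.fit(ranked)                           # train encoder, predictor and decoder on it
        pool += epd.sample_new(samples_per_iter)  # decode new candidates from latent space
    return train_and_rank(pool)[0]                # best architecture: top-1 of the final ranking
```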
C.4 BAYESNAS
Bayesian optimization was first introduced to the neural architecture search field by Kandasamy et al. (2018) and Jin et al. (2019). We chose to evaluate BayesNAS (Zhou et al., 2019) because it is more recent than Auto-Keras (Jin et al., 2019) and the work of Kandasamy et al. (2018), and because these two works use different search spaces than DARTS, resulting in models with significantly worse performance than DARTS. BayesNAS adopts Bayesian optimization to prune the fully-connected DAG using the shared weights to obtain accuracy metrics. The search space follows that of DARTS (Liu et al., 2019b) with minor modifications in connections, but exactly the same operations. Please see (Zhou et al., 2019) for more details. Note that BayesNAS was only implemented in CNN
space. We use the search and model code released by Zhou et al. (2019) with our training pipeline, since the authors did not release the training code.
C.5 EXPERIMENTAL SETUP
Following common practice in NAS, we make use of the word-level language modeling Penn Tree Bank (PTB) dataset (Marcus et al., 1994b) and of the image classification CIFAR-10 dataset (Krizhevsky et al., 2009). For these datasets, the goals are, respectively, finding a recurrent cell that correctly predicts the next word given the input sequence, and finding a convolutional cell that maximizes the classification accuracy. The quality of a candidate is then evaluated using the perplexity metric and top-1 accuracy, respectively.
In the evaluation phase, we always use the same model backbone and parameter initialization for all searched architectures, which ensures fairness and reflects the empirical observation that the searched models are insensitive (accuracy variations of less than 0.002 (Liu et al., 2019b)) to initialization during evaluation. For our RNN comparisons, we follow the procedure used in (Liu et al., 2019b; Pham et al., 2018; Luo et al., 2018) for the final evaluation, consisting of keeping the connections found for the best architecture in the search phase but increasing the hidden state size (to 850 in practice), so as to increase capacity. Furthermore, when training an RNN architecture from scratch, we follow (Yang et al., 2017; Merity et al., 2017) and first make use of standard SGD to speed up training, and then change to average SGD to improve convergence. For all CNN architectures, we use RMSProp for fast optimization (Ying et al., 2019) and enable an auxiliary head and cut-out (DeVries & Taylor, 2017) to boost the performance, as in Liu et al. (2019b).
C.6 ADAPTATION TO REDUCED SEARCH SPACE
When changing to reduced search spaces, we adapted the evaluated search algorithms to achieve the best performance. Below, we describe these modifications.
RNN reduced space
• For DARTS, no changes are needed except modifying the number of nodes in the search space.
• For NAO, to mimic the behavior of the algorithm in the space of 12 nodes, we randomly sample 20% of the possible architectures to define the initial candidate pool. We train the encoder-predictor-decoder network for 250 iterations every 50 epochs using the top-4 architectures in the NAO ranking. At each search iteration, we sample at most 3 new architectures to be added to the pool. The rest of the search logic remains unchanged.
• For ENAS, we reduce the number of architectures sampled in one epoch to 20 and increase the number of batches to 10 for each architecture. All other hyper-parameters are unchanged.
CNN reduced space
• For DARTS, again, no changes are needed except modifying the number of nodes in the search space.
• For NAO, since the topology of the NASBench space is very similar to the original search space, we kept most of the parameters unchanged and only changed the embedding size of the encoder proportionally to the number of nodes (12 × node − 12).
• For ENAS, we set the LSTM sampler size to 64 and keep the temperature at 5.0. The number of aggregation steps of each sampler training is set to 10.
D SUPPLEMENTARY EXPERIMENTS
We provide additional experiments to support our claims.
D.1 INFLUENCE OF THE AMOUNT OF SHARING
Depending on the active connections in the DAG, different architectures are subject to different amounts of weight sharing. In Figure 6 (a), let us consider the 3-node case, with node 1 and node 2
fixed and node 3's incoming connection left to be searched. In this scenario, the input to node 3 can be either directly node 0 (i.e., the input), or node 1, or node 2. In the first case, the only network parameters that the output of node 3 depends on are the weights of its own operation. In the second and third cases, however, the output further depends on the parameters of node 1, and of nodes 1 and 2, respectively.
To study the influence of the amount of sharing on the architecture ranking, we performed an experiment where we fixed the first two nodes and only searched for the third one. This represents a space of 12 architectures (3 possible connections to node 3 × 4 operations). We train them using the same setting as in Section 4.3. The ranking of the 12 architectures is shown in Figure 6 (b), where color indicates the number of shared weight matrices, that is, matrices of nodes 1 and 2 also used in the search for node 3. Note that the top-performing architectures do not share any weights and that the more weights are shared, the worse the architecture performs.
In the CNN space, we conduct a similar experiment on NASBench. With the total number of nodes equal to 6, we only permute the operation of the last node and its connection to one of the previous nodes. In short, we have 4 possible connections and 3 operation choices, for a total of 12 architectures. We compute the Kendall Tau among the architectures with the same connection but different operations, and the results are reported in Table 5. Clearly, the correlation between architectures decreases as the number of shared weight matrices increases.
D.2 RANDOM SAMPLING COMPARISON
As discussed before, the random policy in (Liu et al., 2019b) samples 8 architectures, and picks the best after training them for 300 epochs independently. It might seem contradictory that DARTS outperforms this random policy, but cannot surpass the much simpler one designed in our paper, which only randomly samples 10 architectures (1 per random seed), trains them to convergence and picks the best. However, the random policy in DARTS relies on the assumption that a model that performs well in the early training stage will remain effective until the end of training. While this may sound intuitive, we observed a different picture with our reduced search space.
Since we obtained the ground-truth performance ranking, as discussed in Section 4.2 of the main paper, in Figure 7, we plot the evolution of models’ rank while training proceeds, based on the average validation perplexity over 10 runs. Clearly, there are significant variations during training: Good models in early stages drop lower in the ranking towards the end. As such, there is a non-negligible chance that the random policy in DARTS
picks a model whose performance will be sub-optimal. We therefore believe that our policy that simply samples one model and trains it until convergence yields a more fair baseline. Furthermore, the fact that we perform our comparison using 10 random seeds, for both our approach and the NAS algorithms, vs a single one in (Liu et al., 2019b) makes our conclusions more reliable.
D.3 NASBENCH DETAILED RESULTS
We provide additional evaluations on the NASBench dataset to benchmark the performance of the state-of-the-art NAS algorithms. In addition to the three methods in the main paper, we reimplemented some recent algorithms, such as FBNet (Wu et al., 2018), Single Path One Shot (SPOS) (Guo et al., 2019), and FairNAS (Chu et al., 2019). Note that we removed the FBNet device look-up table and model latency from the objective function since the search for a mobile model is not our primary goal. This also makes it comparable with the other baselines.
To ensure fairness, after the search phase is completed, each method trains the top-1 architecture found by its policy from scratch to obtain ground-truth performance; we repeated all the experiments with 10 random seeds. We report the mean and best top-1 accuracy in Table 6 for a number of nodes n ∈ [4, 7], and the Kendall Tau (K-T) values for one-shot methods following Section 4.2 in the paper. From the results, we observe that: 1) Sampling-based NAS strategies always have better mean accuracy with lower standard deviation, meaning that they converge to a local minimum more easily but do not explore the entire search space. 2) By contrast, one-shot methods explore more diverse solutions, thus having larger standard deviations and lower means, yet they are able to pick a better architecture than sampling-based strategies (94.47 for FairNAS and 94.24 for SPOS, vs. 93.98 for the best sampling-based method, FBNet). 3) ENAS consistently improves as the number of nodes increases. 4) FBNet consistently outperforms DARTS; given the similarity of the two methods, using Gumbel Softmax appears to be the better choice. 5) The variance of these algorithms is large and sensitive to initialization. 6) Even one-shot algorithms cannot find the overall best architecture, which has an accuracy of 95.06. | 1. What is the focus of the paper regarding Neural Architecture Search (NAS) methods?
2. What are the strengths and weaknesses of the paper's approach and conclusions?
3. Do you have any concerns about the methodology used in the paper, particularly regarding the search space and the absence of certain NAS methods?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
5. Are there any additional experimental evidences or considerations that could enhance the paper's findings? | Review | Review
This paper studies the effectiveness of several Neural Architecture Search (NAS) methods, comparing it with that of random policy search. The paper concludes that none of these methods, for a CNN (trained using CIFAR-10) and an RNN model (trained using PTB), is statistically significantly better than random search. The authors suggest that this is due to the weight sharing used by the NAS algorithms to accelerate the network training.
This paper is written well, with a good discussion of the problem. The problem considered is important, and the authors have rightly questioned the effectiveness of NAS methods. Before this paper, Li and Talwalkar, "Random Search and Reproducibility for Neural Architecture Search", had also compared some of the NAS methods with random search and reported similar concerns.
In this sense, the paper is not novel, although I agree this paper has added an additional insight that "weight sharing" is the culprit.
I have two concerns about the methodology used in this paper:
(1) The search space has been greatly reduced, to just 32 possible architectures. It is well known that in a small search space, the difference between the performance of random search and any other systematic search algorithm is quite small. Only when the space gets larger does the power of systematic search start to show up. Although I completely understand the authors' limitation of not having a ground truth for a large search space (infeasible due to a huge computational requirement), without this, the claim of this paper is weak.
(2) Secondly, among the NAS methods considered, I missed the whole class of methods based on Bayesian optimization. There are many such work, but I am listing just two of them here: Jin et al. (2018), “AUTO-KERAS: EFFICIENT NEURAL ARCHITECTURE SEARCH WITH NETWORK MORPHISM” and Kandasamy et al. (2018), “Neural Architecture Search with Bayesian Optimisation and Optimal Transport”. It would be useful to have them in the list of NAS methods considered here.
Post Rebuttal:
I have read the rebuttal. I appreciate the authors' prompt comparison of their method with Bayesian NAS. However, I still think that using a reduced search space, it is not appropriate to compare NAS methods with random search. Moreover, all the claims are only empirical, and more experimental evidence needs to be provided to reject the current NAS methods.
ICLR | Title
Evaluating The Search Phase of Neural Architecture Search
Abstract
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their results on the downstream task. While intuitive, this fails to explicitly evaluate the effectiveness of their search strategies. In this paper, we propose to evaluate the NAS search phase. To this end, we compare the quality of the solutions obtained by NAS search policies with that of random architecture selection. We find that: (i) On average, the state-of-the-art NAS algorithms perform similarly to the random policy; (ii) the widely-used weight sharing strategy degrades the ranking of the NAS candidates to the point of not reflecting their true performance, thus reducing the effectiveness of the search process. We believe that our evaluation framework will be key to designing NAS strategies that consistently discover architectures superior to random ones.
1 INTRODUCTION
By automating the design of a neural network for the task at hand, Neural Architecture Search (NAS) has tremendous potential to impact the practicality of deep learning (Zoph & Le, 2017; Liu et al., 2018b;a; Tan et al., 2018; Baker et al., 2016), and has already obtained state-of-the-art performance on many tasks. A typical NAS technique (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018a) has two stages: the search phase, which aims to find a good architecture, and the evaluation one, where the best architecture is trained from scratch and validated on the test data.
In the literature, NAS algorithms are typically compared based on their results in the evaluation phase. While this may seem intuitive, the search phases of these algorithms often differ in several ways, such as their architecture sampling strategy and the search space they use, and the impact of these individual factors cannot be identified by looking at the downstream task results only. Furthermore, the downstream task results are often reported for a single random seed, which leaves unanswered the question of the robustness of the search strategies.
In this paper, we therefore propose to investigate the search phase of existing NAS algorithms in a controlled manner. To this end, we compare the quality of the NAS solutions with a random search policy, which uniformly randomly samples an architecture from the same search space as the NAS algorithms, and then trains it using the same hyper-parameters as the NAS solutions. To reduce randomness, the search using each policy, i.e., random and NAS ones, is repeated several times, with different random seeds.
We perform a series of experiments on the Penn Tree Bank (PTB) (Marcus et al., 1994a) and CIFAR10 (Krizhevsky et al., 2009) datasets, in which we compare the state-of-the-art NAS algorithms whose code is publicly available—DARTS (Liu et al., 2019b), NAO (Luo et al., 2018) and ENAS (Pham et al., 2018)—to our random policy. We reach the surprising conclusion that, as shown in Table 1, none of them significantly outperforms random sampling. Since the mean performance for randomly-sampled architectures converges to the mean performance over the entire search space, we further conducted Welch Student’s t-tests (Welch, 1947), which reveal that, in the RNN space, ENAS and DARTS cannot be differentiated from the mean of the entire search space, while NAO yields worse performance than random sampling. While the situation is slightly better in the CNN space, all three algorithms still perform similarly to random sampling. Note that this does not necessarily mean that these algorithms perform poorly, but rather that the search space has been sufficiently constrained so that even a random architecture in this space provides good results. To verify this, we experiment with search spaces where we can exhaustively evaluate all architectures, and observe that these algorithms truly cannot discover top-performing architectures.
In addition to this, we observed that the ranking by quality of candidate architectures produced by the NAS algorithms during the search does not reflect the true performance of these architectures in the evaluation phase. Investigating this further allowed us to identify that weight sharing (Pham et al., 2018), widely adopted to reduce the amount of required resources from thousands of GPU days to a single one, harms the individual networks’ performance. More precisely, using reduced search spaces, we make use of the Kendall Tau τ metric¹ to show that the architecture rankings obtained with and without weight sharing are entirely uncorrelated in the RNN space (τ = -0.004 over 10 runs), and have little correlation in the CNN space (τ = 0.195 over 10 runs). Since such a ranking is usually treated as training data for the NAS sampler in the search phase, this further explains the small margin between random search and the NAS algorithms. We also show that training samplers without weight sharing in the CNN space surpasses random sampling by a significant margin.
In other words, we disprove the common belief that the quality of architectures trained with and without weight sharing is similar. We show that the difference in ranking negatively impacts the search phase of NAS algorithms, thus seriously impeding their robustness and performance.
In short, evaluating the search phase of NAS, which is typically ignored, allowed us to identify two key characteristics of state-of-the-art NAS algorithms: The importance of the search space and the negative impact of weight sharing. We believe that our evaluation framework will be instrumental in designing NAS search strategies that are superior to the random one. Our code is publicly available at https://github.com/kcyu2014/eval-nas.
2 RELATED WORK
Since its introduction in (Zoph & Le, 2017), NAS has demonstrated great potential to surpass the human design of deep networks for both visual recognition (Liu et al., 2018b; Ahmed & Torresani, 2018; Chen et al., 2018; Pérez-Rúa et al., 2018; Liu et al., 2019a) and natural language processing (Zoph & Le, 2017; Pham et al., 2018; Luo et al., 2018; Zoph et al., 2018; Liu et al., 2018b; Cai et al., 2018a). Existing search strategies include reinforcement learning (RL) samplers (Zoph & Le, 2017; Zoph et al., 2018; Pham et al., 2018), evolutionary algorithms (Xie & Yuille, 2017; Real et al., 2017; Miikkulainen et al., 2019; Liu et al., 2018b; Lu et al., 2018), gradient-descent (Liu et al., 2019b), Bayesian optimization (Kandasamy et al., 2018; Jin et al., 2019; Zhou et al., 2019) and performance predictors (Liu et al., 2018a; Luo et al., 2018). Here, our goal is not to introduce a new search policy, but rather to provide the means to analyze existing ones. Below, we briefly discuss existing NAS methods and focus on how they are typically evaluated.
Neural architecture search with weight sharing. The potential of vanilla NAS comes with the drawback of requiring thousands of GPU hours even for small datasets, such as PTB and CIFAR-10.
¹The Kendall Tau (Kendall, 1938) metric measures the correlation of two rankings. Details are given in Appendix A.1.
Furthermore, even when using such heavy computational resources, vanilla NAS has to restrict the number of trained architectures from a total of 10⁹ to 10⁴, and increasing the sampler accuracy can only be achieved by increasing the resources.
ENAS (Pham et al., 2018) was the first to propose a training scheme with shared parameters, reducing the resources from thousands of GPU days to one. Instead of being trained from scratch, each sampled model inherits the parameters from previously-trained ones. Since then, NAS research has mainly focused on two directions: 1) Replacing the RL sampler with a better search algorithm, such as gradient descent (Liu et al., 2019b), Bayesian optimization (Zhou et al., 2019) and performance predictors (Luo et al., 2018); 2) Exploiting NAS for other applications, e.g., object detection (Ghiasi et al., 2019; Chen et al., 2019), semantic segmentation (Liu et al., 2019a), and finding compact networks (Cai et al., 2018b; Wu et al., 2018; Chu et al., 2019; Guo et al., 2019).
Characterizing the search space. Ying et al. (2019) and Dong & Yang (2020) introduced datasets that contain the ground-truth performance of CNN cells, and Wang et al. (2019) evaluated some traditional search algorithms on them. Similarly, Radosavovic et al. (2019) characterize many CNN search spaces by computing the statistics of a set of sampled architectures, revealing that, for datasets such as CIFAR-10 or ImageNet, these statistics are similar. While these works support our claim that the evaluation of NAS algorithms is crucial, they do not directly evaluate the state-of-the-art NAS algorithms as we do here.
Evaluation of NAS algorithms. Typically, the quality of NAS algorithms is judged based on the results of the final architecture they produce on the downstream task. In other words, the search and robustness of these algorithms are generally not studied, with (Liu et al., 2019b; So et al., 2019) being the only exceptions for robustness, where results obtained with different random seeds were reported. Here, we aim to further the understanding of the mechanisms behind the search phase of NAS algorithms. Specifically, we propose doing so by comparing them with a simple random search policy, which uniformly randomly samples one architecture per run in the same search space as the NAS techniques.
While some works have provided partial comparisons to random search, these comparisons unfortunately did not give a fair chance to the random policy. Specifically, (Pham et al., 2018) reports the results of only a single random architecture, and (Liu et al., 2018b) those of an architecture selected among 8 randomly sampled ones as the most promising one after training for 300 epochs only. Here, we show that a fair comparison to the random policy, obtained by training all architectures, i.e., random and NAS ones, for 1000 epochs and averaging over multiple random seeds for robustness, yields a different picture; the state-of-the-art search policies are no better than the random one.
The motivation behind this comparison was our observation of only a weak correlation between the performance of the searched architectures and the ones trained from scratch during the evaluation phase. This phenomenon was already noticed by Zela et al. (2018), and concurrently to our work by Li & Talwalkar (2019); Xie et al. (2019); Ying et al. (2019), but the analysis of its impact or its causes went no further. Here, by contrast, we link this difference in performance between the search and evaluation phases to the use of weight sharing.
While this may seem to contradict the findings of Bender et al. (2018), which, on CIFAR-10, observed a strong correlation between architectures trained with and without weight sharing when searching a CNN cell, our work differs from (Bender et al., 2018) in two fundamental ways: 1) The training scheme in (Bender et al., 2018), in which the entire model with shared parameters is trained via random path dropping, is fundamentally different from those used by state-of-the-art weight sharing NAS strategies (Pham et al., 2018; Liu et al., 2019b; Luo et al., 2018); 2) While the correlation in (Bender et al., 2018) was approximated using a small subset of sampled architectures, we make use of a reduced search space where we can perform a complete evaluation of all architectures, thus providing an exact correlation measure in this space.
3 EVALUATING THE NAS SEARCH
In this section, we detail our evaluation framework for the NAS search phase. As depicted in Fig. 1(a,b), typical NAS algorithms consist of two phases:
• Search: The goal of this phase is to find the best candidate architecture from the search space². This is where existing algorithms, such as ENAS, DARTS and NAO, differ. Nevertheless, for all the algorithms, the search depends heavily on initialization. In all the studied policies, initialization is random and the outcome thus depends on the chosen random seed.
• Evaluation: In this phase, all the studied algorithms retrain the best model found in the search phase. The retrained model is then evaluated on the test data.
The standard evaluation of NAS techniques focuses solely on the final results on the test data. Here, by contrast, we aim to evaluate the search phase itself, which truly differentiates existing algorithms.
To do this, as illustrated in Fig. 1(c), we establish a baseline; we compare the search phase of existing algorithms with a random search policy. An effective search algorithm should yield a solution that clearly outperforms the random policy. Below, we introduce our framework to compare NAS search algorithms with random search. The three NAS algorithms that we evaluated, DARTS (Liu et al., 2019b), NAO (Luo et al., 2018) and ENAS (Pham et al., 2018), are representative of the state of the art for different search algorithms—gradient descent, performance prediction and reinforcement learning, respectively—and are discussed in Appendix C.
3.1 COMPARING TO RANDOM SEARCH
We implement our random search policy by simply assigning uniform probabilities to all operations. Then, for each node in the Directed Acyclic Graph (DAG) that is typically used to represent an architecture, we randomly sample a connection to one previous node from the resulting distributions.
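As a concrete illustration, the following is a minimal sketch of such a uniform sampling procedure; the function name and the operation set are illustrative placeholders, not taken from our released code.

```python
import random

# Candidate operations for an RNN cell (illustrative set).
OPERATIONS = ["identity", "sigmoid", "tanh", "relu"]

def sample_random_architecture(num_nodes):
    """Uniformly sample one architecture from the DAG search space.

    Node 0 is the input; every intermediate node i picks one incoming
    connection among nodes 0..i-1 and one operation, both uniformly.
    """
    architecture = []
    for node_id in range(1, num_nodes + 1):
        incoming = random.randrange(node_id)   # uniform over previous nodes
        operation = random.choice(OPERATIONS)  # uniform over operations
        architecture.append((incoming, operation))
    return architecture

print(sample_random_architecture(num_nodes=12))
```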
An effective search policy should outperform the random one. To evaluate this, we compute the validation results of the best architecture found by the NAS algorithm trained from scratch, as well as those of a single randomly sampled architecture. Comparing these values for a single random seed would of course not provide a reliable measure. Therefore, we repeat this process for multiple random seeds used both during the search phase of the NAS algorithm and to sample one random architecture as described above. We then report the means and standard deviations of these results over the different seeds. Note that while we use different seeds for the search and random sampling, we always use the same seed when training the models from scratch during the evaluation phase.
Our use of multiple random seeds and of the same number of epochs for the NAS algorithms and for our random search policy makes the comparison fair. This contrasts with the comparisons performed in (Pham et al., 2018), where the results of only a single random architecture were reported, and in (Liu et al., 2019b), which selected a single best random architecture among an initial set of 8 after training for 300 epochs only. As shown in Appendix D.2, some models that perform well in the early training stages may yield worse performance than others after convergence. Therefore, choosing the best random architecture after only 300 epochs for PTB and 100 for CIFAR-10, and doing so for a single random seed, might not be representative of the general behavior.
²Details about search spaces are provided in Appendix B.
3.2 SEARCH IN A REDUCED SPACE
Because of the size of standard search spaces, one cannot understand the quality of the search by fully evaluating all possible solutions. Hence, we propose to make use of reduced search spaces with ground-truth architecture performances available to evaluate the search quality. For RNNs, we simply reduce the number of nodes in the search space from 12 to 2. Given that each node is identified by two values, the ID of the incoming node and the activation function, the space has a cardinality $|S| = n! \cdot |O|^n$, where n = 2 nodes and |O| = 4 operations, thus yielding 32 possible solutions. To obtain the ground truth, we train all of these architectures individually. Each architecture is trained 10 times with a different seed, which therefore yields a mean and standard deviation of its performance. The mean value is used as ground truth—the actual potential of the given architecture. These experiments took around 5000 GPU hours.
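To make the cardinality computation concrete, the following short sketch enumerates the 2-node space; the operation names are illustrative.

```python
from itertools import product

OPS = ["identity", "sigmoid", "tanh", "relu"]
n = 2  # number of intermediate nodes

# Node i (1-indexed) may take its input from any of nodes 0..i-1.
connection_choices = [range(i) for i in range(1, n + 1)]
space = [
    tuple(zip(conns, ops))
    for conns in product(*connection_choices)
    for ops in product(OPS, repeat=n)
]
assert len(space) == 32  # matches |S| = n! * |O|^n for n = 2, |O| = 4
```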
For CNNs, we make use of NASBench-101 (Ying et al., 2019), a CNN graph-based search space with 3 possible operations, conv3x3, conv1x1 and max3x3. This framework defines search spaces with between 3 and 7 nodes, with 423,624 architectures in the 7-node case. To the best of our knowledge, we are the first to evaluate the NAS methods used in this paper on NASBench.
4 EXPERIMENTAL RESULTS
To analyze the search phase of the three state-of-the-art NAS algorithms mentioned above, we first compare these algorithms to our random policy when using standard search spaces for RNNs on (PTB) and CNNs on CIFAR-10. Details about the experiment setting are in Appendix C.5. The surprising findings in this typical NAS use case prompted us to study the behavior of the search strategies in reduced search spaces. This allowed us to identify a factor that has a significant impact on the observed results: Weight sharing. We then quantify this impact on the ranking of the NAS candidates, evidencing that it dramatically affects the effectiveness of the search.
4.1 NAS COMPARISON IN A STANDARD SEARCH SPACE
Below, we compare DARTS (Liu et al., 2019b), NAO (Luo et al., 2018), ENAS (Pham et al., 2018) and BayesNAS (Zhou et al., 2019) with our random search policy, as discussed in Section 3.1. We follow (Liu et al., 2019b) to define an RNN search space of 12 nodes and a CNN one of 7 nodes. For each of the four search policies, we run 10 experiments with a different initialization of the sampling policy. During the search phase, we used the hyper-parameters and code provided by the authors for each policy. Once the best architecture is identified by the search phase, it is used for evaluation, i.e., we train the chosen architecture from scratch for 1000 epochs for RNNs and 600 for CNNs.
RNN Results. In Figure 2, we plot, on the left, the mean perplexity evolution over the 1000 epochs, obtained by averaging the results of the best architectures found using the 10 consecutive seeds.³ On the right, we show the perplexity evolution for the best cell of each strategy among the 10 different runs. Random sampling is robust and consistently competitive. As shown in Table 1, it outperforms on average the DARTS and NAO policies, and yields the overall best cell for these experiments, with a perplexity of 57.60. Further training this cell for 4000 epochs, as in (Liu et al., 2019b), yields a perplexity of 55.93. The excellent performance of the random policy evidences the high expressiveness of the manually-constructed search space; even arbitrary policies in this space perform well, as evidenced by the relatively low standard deviation over the 10 seeds of the random architectures, shown in Table 1 and Figure 2 (left).
³Starting from 1268, which is right after 1267, the seed released by Liu et al. (2019b). Note that, using this seed, we can reproduce the DARTS RNN search and obtain a validation PPL of 55.7, as in (Liu et al., 2019b).
CNN Results. In Table 2, we compare the NAS methods with our random policy in the search space of Liu et al. (2019b). We provide the accuracy reported in the original papers as well as the accuracy we reproduced using our implementation. Note that the NAS algorithms only marginally outperform random search, by less than 0.5% in top-1 accuracy. The best architecture was discovered by NAO, with an accuracy of 97.10%, again less than 0.5% higher than the randomly discovered one. Note that our random sampling comes at no search cost. By contrast, Li & Talwalkar (2019) obtained an accuracy of 97.15% with a different random search policy having the same cost as DARTS.
Observations:
• The evaluated state-of-the-art NAS algorithms do not surpass random search by a significant margin, and even perform worse in the RNN search space.
• The ENAS policy sampler has the lowest variance among the three tested ones. This shows that ENAS is more robust to the variance caused by the random seed of the search phase.
• The NAO policy is more sensitive to the search space; while it yields the best performance in CNN space, it performs the worst in the RNN one.
• The DARTS policy is very sensitive to random initialization, and yields the largest standard deviation across the 10 runs (2.54 in RNN and 0.23 in CNN space).
Such a comparison of search policies would not have been possible without our framework. Nevertheless, the above analysis does not suffice to identify the reason behind these surprising observations. As mentioned before, one reason could be that the search space has been sufficiently constrained so that all architectures perform similarly well. By contrast, if we assume that the search space does contain significantly better architectures, then we can conclude that these search algorithms truly fail to find a good one. To answer this question, we evaluate these methods in a reduced search space, where we can obtain the true performance of all possible architectures.
4.2 SEARCHING A REDUCED SPACE
The results in the previous section highlight the inability of the studied methods to surpass random search. Prompted by these surprising results, we dig deeper into their causes. Below, we make use of search spaces with fewer nodes, which we can explore exhaustively.
Reduced RNN space. We use the same search space as in Section 3.2 but reduce the number of intermediate nodes to 2. In Table 3 (A), we provide the results of searching the RNN 2-node space. Its smaller size allows us to exhaustively compute the results of all possible solutions, thus determining the upper bound for this case. In Figure 3, we plot the rank of the top-1 architecture discovered by the three NAS algorithms for each of the 10 different runs.
We observe that: (i) All policies failed to find the architecture that actually performs best; (ii) The ENAS policy always converged to the same architecture. This further evidences the robustness of ENAS to the random seed; (iii) NAO performs better than random sampling on average because it keeps a ranking of architectures; (iv) DARTS never discovered a top-5 architecture.
Reduced CNN space. In Table 3 (B), we report the mean and best test top-1 accuracy over 10 different runs on the NASBench-101 7-node space. To assess the search performance, we also show the best architecture rank in the entire space. The best test accuracy found by these methods is 93.33, by NAO, which remains much lower than the ground-truth best of 95.06. In terms of ranking, the best rank of these methods across 10 runs is 19522, which is among the top 4% of architectures and yields a probability of 0.62 to surpass a randomly-sampled one given the same search budget. Note that ENAS and DARTS only have a 7% and 24% chance, respectively, to surpass the random policy. See Appendix A.2 for the definition of this probability, and Appendix D.3 for detailed results.
NAO seems to constantly outperform random search in the reduced space. Nevertheless, the final architecture chosen by NAO is always one of the architectures from the initial pool, which were sampled uniformly at random. This indicates that the ranking of NAO is not correctly updated throughout the search and that, in practice, in a reduced space, NAO is similar to random search.
4.3 IMPACT OF WEIGHT SHARING
Our previous experiments in reduced search spaces highlight that the ranking of the searched architectures does not reflect the ground-truth one. As we will show below, this can be traced back to weight sharing, which all the tested algorithms, and the vast majority of existing ones, rely on. To evidence this, we perform the following experiments:
Without WS: We make use of the reduced space, where we have the architecture’s real performance.
With WS: We train the architectures in parallel, using the weight sharing strategy employed in NAO and ENAS. As DARTS does not have discrete representations of the solutions during the search, the idea of solution ranking does not apply to it. During training, each mini-batch is given to an architecture uniformly sampled from the search space. We repeat the process 10 times, with 10 random seeds, and train the shared weights for 1000 epochs for the RNN experiments and 200 epochs for the CNN ones. Note that this approach is equivalent to Single Path One Shot (SPOS) (Guo et al., 2019). It guarantees equal expectations of the number of times each architecture is sampled, thus overcoming the bias due to the unbalanced training resulting from ineffective sampling policies.
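To illustrate this uniform single-path scheme, here is a minimal PyTorch sketch on a toy one-node space; the model, data and hyper-parameters are placeholders rather than our actual experimental setup.

```python
import random
import torch
import torch.nn as nn

class SharedCell(nn.Module):
    """Toy supernet: candidate operations share the surrounding weights;
    only one operation is active per forward pass."""
    def __init__(self, dim=16):
        super().__init__()
        self.ops = nn.ModuleList([nn.Identity(), nn.Sigmoid(), nn.Tanh(), nn.ReLU()])
        self.linear_in = nn.Linear(dim, dim)   # shared weights
        self.linear_out = nn.Linear(dim, 1)    # shared weights

    def forward(self, x, op_index):
        return self.linear_out(self.ops[op_index](self.linear_in(x)))

model = SharedCell()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for step in range(1000):
    x, y = torch.randn(32, 16), torch.randn(32, 1)
    op_index = random.randrange(len(model.ops))  # uniform architecture sampling
    loss = nn.functional.mse_loss(model(x, op_index), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```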
We then compute the correlation between the architecture rankings found with WS and the ground truth (i.e., the architectures trained independently). For each of the 10 runs of the weight sharing strategy, we evaluate the Kendall Tau metric (defined in Appendix A.1) of the final rankings with respect to the real averaged ranking.
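Computing this correlation is a one-liner with SciPy; the values below are made-up placeholders.

```python
from scipy.stats import kendalltau

# Per-architecture validation perplexity (lower is better).
ground_truth = [58.1, 59.3, 60.2, 61.0, 63.5]   # standalone training
with_sharing = [60.5, 58.9, 62.1, 59.8, 64.0]   # shared-weight evaluation

tau, p_value = kendalltau(ground_truth, with_sharing)
print(f"Kendall Tau = {tau:.3f}")  # 1: identical rankings, -1: reversed
```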
RNN Results. In Figure 4(a), we depict the architecture performance obtained without WS (sorted in ascending order of average validation perplexity), and the corresponding performance with WS. In Figure 4(b), we show the rank difference, where the best and worst were found using the Kendall Tau metric, and show a concrete rank change example in Figure 4(c).
CNN Results. We report the average Kendall Tau across 10 different runs. Note that we sampled up to 200 architectures for each experiment and fully evaluated them on the entire test set, using the test accuracy for ranking. The Kendall Tau for the 4- to 7-node search spaces is, respectively, 0.441, 0.314, 0.214 and 0.195. We also provide other statistics in Table 6 of Appendix D.3.
Since NAO and ENAS intrinsically disentangle the training of the shared weights and the sampler, to further confirm the negative effect of weight sharing, we adapt these algorithms to use the architectures’ performance in the NASBench dataset to train their samplers. Table 4 evidences that, after removing weight sharing, both ENAS and NAO consistently discover a good architecture, as indicated by a small difference between the best over 10 runs and the mean performance. More interestingly, for the 7-node case, the best cells discovered (94.11% by NAO and 94.04% by ENAS) are more than 1% higher than the best cells found with weight sharing (93.33 and 92.54, respectively, in Table 3).
Observations:
• The difference of architecture performance is not related to the use of different random seeds, as indicated by the error bars in Figure 4(a).
• WS never produces the true ranking, as evidenced by the Best case in Figure 4(b).
• The behavior of the WS rankings is greatly affected by changing the seed. In particular, the Kendall Tau values for the plots in Figure 4(b) are 0.282, −0.004 and −0.116 for Best, Average and Worst, respectively.
• For RNNs, the Kendall Tau values are close to 0, which suggests a lack of correlation between the WS rankings and the true one. By contrast, for CNNs, the correlation is on average higher than for RNNs. This matches the observation in Section 4.1 that CNN results are generally better than RNN ones.
• In a reduced CNN space, the ranking disorder increases with the space complexity, i.e., this disorder is proportional to the amount of weight sharing.⁴
• If we train NAO and ENAS without weight sharing on NASBench, the performance is on average 1% higher than with it. This further evidences that weight sharing negatively impacts the sampler, and that with a good ranking, the sampler can be trained better. Furthermore, the probability to surpass random search increases from 0.62 to 0.92 for NAO and from 0.07 to 0.90 for ENAS.
⁴We also conduct another experiment regarding the amount of sharing in Appendix D.1.
Together with the previous results, we believe this evidences the negative impact of weight sharing: it dramatically affects the performance of the sampled architectures, thus complicating the overall search process and leading to search policies that are no better than the random one.
5 CONCLUSION
In this paper, we have analyzed the effectiveness of the search phase of NAS algorithms via fair comparisons to random search. We have observed that, surprisingly, the search policies of state-of-the-art NAS techniques are no better than random, and have traced the reason for this to the use of (i) a constrained search space and (ii) weight sharing, which shuffles the architecture ranking during the search, thus negatively impacting it.
In essence, our gained insights highlight two key properties of state-of-the-art NAS strategies, which had been overlooked in the past due to the single-minded focus of NAS evaluation on the results on the target tasks. We believe that this will be key to the development of novel NAS algorithms. In the future, we will aim to do so by designing relaxed weight sharing strategies.
6 ACKNOWLEDGEMENT
This work was supported in part by the Swiss National Science Foundation. We would also like to thank Rene Ranftl and Vladlen Koltun for the discussions and support.
A METRICS TO EVALUATE NAS ALGORITHMS
A.1 KENDALL TAU METRIC
As a correlation measure, we make use of the Kendall Tau (τ ) metric (Kendall, 1938): a number in the range [-1, 1] with the following properties:
• τ = −1: Maximum disagreement. One ranking is the opposite of the other.
• τ = 1: Maximum agreement. The two rankings are identical.
• τ close to 0: A value close to zero indicates the absence of correlation.
A.2 PROBABILITY TO SURPASS RANDOM SEARCH
As discussed in Section 3.2, the goal of NASBench is to search for a CNN cell with up to 7 nodes and 3 operations, resulting in 423,624 architectures in total. Each architecture is trained 3 times with different random initializations for up to 108 epochs on the CIFAR-10 training set, and evaluated on the test split. Hence, the average test accuracy of these runs can be seen as the ground-truth performance. In our experiments, we use this to rank the architectures, from 1 (highest accuracy) to 423,624. Given the best architecture’s rank r after n runs, and the maximum rank r_max equal to the total number of architectures, the probability that the best architecture discovered is better than a randomly searched one given the same budget is given by
$p = 1 - (1 - r/r_{max})^n.$  (1)

We use this as a new metric to evaluate the search phase.
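A direct transcription of Equation (1) as printed (the function name is ours):

```python
def prob_surpass_random(r, r_max, n):
    """Equation (1): probability that the best architecture found,
    with ground-truth rank r among r_max, surpasses the best of n
    uniformly sampled architectures."""
    return 1 - (1 - r / r_max) ** n
```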
B NAS SEARCH SPACE REPRESENTATION
As discussed in the main paper, our starting point is a search space for a neural architecture, as illustrated in Figure 5. A convolutional cell can be represented with a similar topological structure. Following common practice in NAS (Zoph & Le, 2017), a candidate architecture sampled from this space connects the input and the output nodes through a sequence of intermediary ones. Each node is connected to others and has an operation attached to it.
A way of representing this search space (Pham et al., 2018; Luo et al., 2018), depicted in Figure 5(b), is by using strings. Each character in the string indicates either the node ID that the current node is connected to, or the operation selected for the current node. Operations include the identity, sigmoid, tanh and ReLU (Nair & Hinton, 2010).
Following the alternative way introduced in (Liu et al., 2019b), we make use of a vectorized representation of these strings. More specifically, as illustrated by Figure 5(c), a node ID, resp. an operation, is encoded as a vector of probabilities over all node IDs, resp. all operations. For instance, the connection between nodes i and j is represented as $y^{(i,j)}(x) = \sum_{o \in O} p_o\, o(x)$, with O the set of all operations, and $p_o = \mathrm{softmax}(\alpha_o) = \exp(\alpha_o) / \sum_{o' \in O} \exp(\alpha_{o'})$ the probability of each operation.
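A minimal PyTorch sketch of this continuous relaxation for a single edge, with a toy operation set (the class name is ours):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge (i, j): the output is the softmax-weighted sum of all
    candidate operations, parameterised by alpha."""
    def __init__(self):
        super().__init__()
        self.ops = nn.ModuleList([nn.Identity(), nn.Sigmoid(), nn.Tanh(), nn.ReLU()])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # one alpha_o per op

    def forward(self, x):
        p = F.softmax(self.alpha, dim=0)  # p_o = exp(alpha_o) / sum exp(alpha_o')
        return sum(p_o * op(x) for p_o, op in zip(p, self.ops))
```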
C NAS ALGORITHMS
Here, we discuss the three state-of-the-art NAS algorithms used in our experiments in detail, including their hyper-parameters during the search phase. The current state-of-the-art NAS method on CIFAR-10 is ProxylessNAS (Cai et al., 2018b), with a top-1 accuracy of 97.92. This algorithm inherits its sampler from ENAS and DARTS, but uses a different objective function, backbone model, and search space. In addition, its code is not publicly available, which precludes us from directly evaluating it.
C.1 ENAS
It adopts a reinforcement learning sampling strategy that is updated with the REINFORCE algorithm. The sampler is implemented as a two-layer LSTM (Hochreiter & Schmidhuber, 1997) and generates a sequence of strings. In the training process, each candidate sampled by the ENAS controller is trained on an individual mini-batch. At the end of each epoch, the controller samples new architectures
that are evaluated on a single batch of the validation dataset. After this, the controller is updated accordingly using these validation metrics. We refer the reader to (Pham et al., 2018) for details about the hyper-parameter settings.
C.2 DARTS
It vectorizes the aforementioned strings as discussed in Section B and shown in Fig. 5(c). The sampling process is then parameterized by the vector α, which is optimized via gradient-descent in a dual optimization scheme: The architecture is first trained while fixing α, and α is then updated while the network is fixed. This process is repeated in an alternating manner. In the evaluation phase, DARTS samples the top-performing architecture by using the trained α vector as probability prior, i.e., the final model is not a soft average of all paths but one path in the DAG, which makes its evaluation identical to that of the other NAS algorithms. Note that we use the same hyper-parameters as in the released code of Liu et al. (2019b).
C.3 NAO
It implements a gradient-descent algorithm, but instead of vectorizing the strings as in DARTS, it makes use of a variational auto-encoder (VAE) to learn a latent representation of the candidate architectures. Furthermore, it uses a performance predictor, which takes a latent vector as input to predict the corresponding architecture performance. In short, the search phase of NAO consists of first randomly sampling an initial pool of architectures and training them so as to obtain a ranking. This ranking is then used to train the encoder-predictor-decoder network, from which new candidates are sampled, and the process is repeated in an iterative manner. The best architecture is then taken as the top-1 in the NAO ranking. We directly use the code released by Luo et al. (2018).
C.4 BAYESNAS
Bayesian optimization was first introduced to the neural architecture search field by Kandasamy et al. (2018) and Jin et al. (2019). We chose to evaluate BayesNAS (Zhou et al., 2019) because it is more recent than Auto-Keras (Jin et al., 2019) and the work of Kandasamy et al. (2018), and because these two works use search spaces different from that of DARTS, resulting in models with significantly worse performance than DARTS. BayesNAS adopts Bayesian optimization to prune the fully-connected DAG, using the shared weights to obtain accuracy metrics. The search space follows that of DARTS (Liu et al., 2019b) with minor modifications to the connections, but exactly the same operations. Please see (Zhou et al., 2019) for more details. Note that BayesNAS was only implemented in CNN
space. We use the search and model code released by Zhou et al. (2019) with our training pipeline, since the authors did not release the training code.
C.5 EXPERIMENTAL SETUP
Following common practice in NAS, we make use of the word-level language modeling Penn Tree Bank (PTB) dataset (Marcus et al., 1994b) and of the image classification CIFAR-10 dataset (Krizhevsky et al., 2009). For these datasets, the goals are, respectively, finding a recurrent cell that correctly predicts the next word given the input sequence, and finding a convolutional cell that maximizes the classification accuracy. The quality of a candidate is then evaluated using the perplexity metric and top-1 accuracy, respectively.
In the evaluation phase, we always use the same model backbone and parameter initialization for all searched architectures, which ensures fairness and reflects the empirical observation that the searched models are insensitive (accuracy variations of less than 0.002 (Liu et al., 2019b)) to initialization during evaluation. For our RNN comparisons, we follow the procedure used in (Liu et al., 2019b; Pham et al., 2018; Luo et al., 2018) for the final evaluation, consisting of keeping the connections found for the best architecture in the search phase but increasing the hidden state size (to 850 in practice), so as to increase capacity. Furthermore, when training an RNN architecture from scratch, we follow (Yang et al., 2017; Merity et al., 2017) and first make use of standard SGD to speed up training, and then change to average SGD to improve convergence. For all CNN architectures, we use RMSProp for fast optimization (Ying et al., 2019) and enable an auxiliary head and cut-out (DeVries & Taylor, 2017) to boost the performance, as in Liu et al. (2019b).
C.6 ADAPTATION TO REDUCED SEARCH SPACE
When changing to reduced search spaces, we adapted the evaluated search algorithms to achieve the best performance. Below, we describe these modifications.
RNN reduced space
• For DARTS, no changes are needed except modifying the number of nodes in the search space.
• For NAO, to mimic the behavior of the algorithm in the space of 12 nodes, we randomly sample 20% of the possible architectures to define the initial candidate pool. We train the encoder-predictor-decoder network for 250 iterations every 50 epochs using the top-4 architectures in the NAO ranking. At each search iteration, we sample at most 3 new architectures to be added to the pool. The rest of the search logic remains unchanged.
• For ENAS, we reduce the number of architectures sampled in one epoch to 20 and increase the number of batches to 10 for each architecture. All other hyper-parameters are unchanged.
CNN reduced space
• For DARTS, again, no changes are needed except modifying the number of nodes in the search space.
• For NAO, since the topology of the NASBench space is very similar to the original search space, we kept most of the parameters unchanged, and only changed the embedding size of the encoder proportionally to the number of nodes (12 × nodes − 12).
• For ENAS, we set the LSTM sampler size to 64 and keep the temperature at 5.0. The number of aggregation steps of each sampler training is set to 10.
D SUPPLEMENTARY EXPERIMENTS
We provide additional experiments to support our claims.
D.1 INFLUENCE OF THE AMOUNT OF SHARING
Depending on the active connections in the DAG, different architectures are subject to different amounts of weight sharing. In Figure 6 (a), let us consider the 3-node case, with node 1 and node 2 fixed and node 3 selecting its incoming node. In this scenario, the input to node 3 can be either directly node 0 (i.e., the input), or node 1, or node 2. In the first case, the only network parameters that the output of node 3 depends on are the weights of its own operation. In the second and third cases, however, the output further depends on the parameters of node 1, and of nodes 1 and 2, respectively.
To study the influence of the amount of sharing on the architecture ranking, we performed an experiment where we fixed the first two nodes and only searched for the third one. This represents a space of 12 architectures (3 possible connections to node 3 × 4 operations). We train them using the same setting in Section 4.3. The ranking of the 12 architectures is shown in Figure 6 (b), where color indicates the number of shared weight matrices, that is, matrices of nodes 1 and 2 also used in the search for node 3. Note that the top-performing architectures do not share any weights and that the more weights are shared, the worse the architecture performs.
In CNN space, we conduct a similar experiment on NASBench. With the total number of nodes equal to 6, we only permute the last node's operation and its connection to one of the previous nodes. In short, we have 4 connection possibilities and 3 operation choices, i.e., 12 architectures in total. We compute the Kendall Tau among the architectures with the same connection but different operations, and the results are reported in Table 5. Clearly, the correlation between architectures decreases as the number of shared weight matrices increases.
D.2 RANDOM SAMPLING COMPARISON
As discussed before, the random policy in (Liu et al., 2019b) samples 8 architectures, and picks the best after training them for 300 epochs independently. It might seem contradictory that DARTS outperforms this random policy, but cannot surpass the much simpler one designed in our paper, which only randomly samples 10 architectures (1 per random seed), trains them to convergence and picks the best. However, the random policy in DARTS relies on the assumption that a model that performs well in the early training stage will remain effective until the end of training. While this may sound intuitive, we observed a different picture with our reduced search space.
Since we obtained the ground-truth performance ranking, as discussed in Section 4.2 of the main paper, in Figure 7, we plot the evolution of models’ rank while training proceeds, based on the average validation perplexity over 10 runs. Clearly, there are significant variations during training: Good models in early stages drop lower in the ranking towards the end. As such, there is a non-negligible chance that the random policy in DARTS
picks a model whose performance will be sub-optimal. We therefore believe that our policy that simply samples one model and trains it until convergence yields a more fair baseline. Furthermore, the fact that we perform our comparison using 10 random seeds, for both our approach and the NAS algorithms, vs a single one in (Liu et al., 2019b) makes our conclusions more reliable.
D.3 NASBENCH DETAILED RESULTS.
We provide additional evaluations on the NASBench dataset to benchmark the performance of the state-of-the-art NAS algorithms. In addition to the three methods in the main paper, we reimplemented some recent algorithms, such as FBNet (Wu et al., 2018), Single Path One Shot (SPOS) (Guo et al., 2019), and FairNAS (Chu et al., 2019). Note that we removed the FBNet device look-up table and model latency from the objective function since the search for a mobile model is not our primary goal. This also makes it comparable with the other baselines.
To ensure fairness, after the search phase is completed, each method trains the top-1 architecture found by its policy from scratch to obtain the ground-truth performance; we repeated all the experiments with 10 random seeds. We report the mean and best top-1 accuracy in Table 6 for a number of nodes n ∈ [4, 7], and the Kendall Tau (K-T) values for the one-shot methods, following Section 4.2 of the paper. From the results, we observe that: 1) Sampling-based NAS strategies always have a better mean accuracy with a lower standard deviation, meaning that they converge to a local minimum more easily but do not explore the entire search space. 2) By contrast, one-shot methods explore more diverse solutions, thus having larger standard deviations and lower means, but are able to pick a better architecture than sampling-based strategies (94.47 for FairNAS and 94.24 for SPOS, vs the best sampler-based result of 93.98 by FBNet). 3) ENAS constantly improves as the number of nodes increases. 4) FBNet consistently outperforms DARTS; given the similarity of the two methods, using the Gumbel Softmax appears to be the better choice. 5) The variance of these algorithms is large and sensitive to initialization. 6) Even one-shot algorithms cannot find the overall best architecture, with accuracy 95.06.
2. What are the strengths and weaknesses of the proposed approach in the paper?
3. How does the reviewer assess the novelty and impact of the paper's contributions?
4. Are there any concerns or questions about the experimental design and results presented in the paper?
5. Can the insights from the paper be applied to improve neural architecture search methods in practice? | Review | Review
This paper studies an important problem: fairly evaluating the performance of existing neural architecture search algorithms against a random sampling algorithm.
Neural architecture search usually involves two phases: model search and model tuning. In the search phase, the best architectures after limited training are selected. In model tuning, the selected architectures are trained fully. However, it has been noticed that the best architectures after limited training may not translate to the globally best architectures. Previous research, such as Liu et al. (2019b), has tried comparing to random sampling, but the random architectures were not trained fully. The authors train random architectures fully before selecting the best one, which turns out to perform as well as or better than the sophisticated neural architecture search methods. The paper also identifies that parameter sharing turns out to be a major reason why the sophisticated NAS methods do not really work well.
The insights are obviously important and valuable. The insight on parameter sharing is even a bit disheartening. Parameter sharing is the main reason why NAS can scale to very large domains. Without it, is NAS still practical or useful? On the other hand, it is a bit unsatisfactory that the paper does not provide or even suggest solutions to remedy the identified issues.
Another comment is that it is a stretch to consider the evaluation done in the paper a new framework. It is simply a new baseline plus a new experiment design.
About Equation (1) in Appendix A.2, it seems to simplify to p=(r/r_max)^n. Is the formula correct? |
ICLR | Title
On-Policy Trust Region Policy Optimisation with Replay Buffers
Abstract
Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. On-policy methods bring many benefits, such as the ability to evaluate each resulting policy. However, they usually discard all the information about the policies which existed before. In this work, we propose an adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to create a method combining the advantages of on- and off-policy learning. To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies. The method uses trust region optimisation, while avoiding some of the common problems of algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as a trainable covariance matrix instead of a fixed one. In many cases, the method improves the results not only compared to state-of-the-art on-policy trust region learning algorithms such as PPO, ACKTR and TRPO, but also with respect to their off-policy counterpart, DDPG.
1 INTRODUCTION
The past few years have been marked by the active development of reinforcement learning methods. Although the mathematical foundations of reinforcement learning had been known long before (Sutton & Barto, 1998), starting from 2013, novel deep learning techniques allowed solving vision-based discrete control tasks such as Atari 2600 games (Mnih et al., 2013) as well as continuous control problems (Lillicrap et al., 2015; Mnih et al., 2016). Many of the leading state-of-the-art reinforcement learning methods share the actor-critic architecture (Crites & Barto, 1995). Actor-critic methods separate the actor, providing a policy, and the critic, providing an approximation for the expected discounted cumulative reward or some derived quantities such as advantage functions (Baird III, 1993). However, despite improvements, state-of-the-art reinforcement learning still suffers from poor sample efficiency and extensive parameterisation. For most real-world applications, in contrast to simulations, there is a need to learn in real time and over a limited training period, while minimising any risk that would cause damage to the actor or the environment.
Reinforcement learning algorithms can be divided into two groups: on-policy and off-policy learning. On-policy approaches (e.g., SARSA (Rummery & Niranjan, 1994), ACKTR (Wu et al., 2017)) evaluate the target policy by assuming that future actions will be chosen according to it; hence, the exploration strategy must be incorporated as a part of the policy. Off-policy methods (e.g., Q-learning (Watkins, 1989), DDPG (Lillicrap et al., 2015)) separate the exploration strategy, which modifies the policy to explore different states, from the target policy.
The off-policy methods commonly use the concept of replay buffers to memorise the outcomes of the previous policies and therefore exploit the information accumulated through the previous iterations (Lin, 1993). Mnih et al. (2013) combined this experience replay mechanism with Deep Q-Networks (DQN), demonstrating end-to-end learning on Atari 2600 games. One limitation of DQN is that it can only operate on discrete action spaces. Lillicrap et al. (2015) proposed an extension of DQN to handle continuous action spaces based on the Deep Deterministic Policy Gradient (DDPG). There, exponential smoothing of the target actor and critic weights has been introduced to ensure stability of the rewards and critic predictions over the subsequent iterations. In order to improve the variance of policy gradients, Schulman et al. (2015b) proposed a Generalised Advantage Function.
Mnih et al. (2016) combined this advantage function learning with a parallelisation of exploration using differently trained actors in their Asynchronous Advantage Actor Critic model (A3C); however, Wang et al. (2016) demonstrated that such parallelisation may also have a negative impact on sample efficiency. Some work has been performed on improving exploratory strategies for reinforcement learning (Hester et al., 2013), but it still does not solve the fundamental restriction of being unable to evaluate the actual policy, nor does it remove the necessity to provide an exploratory strategy as a separate part of the method.
In contrast to those, state-of-the-art on-policy methods have many attractive properties: they are able to evaluate exactly the resulting policy, with no need to provide a separate exploration strategy. However, they suffer from poor sample efficiency, to a larger extent than off-policy reinforcement learning. The TRPO method (Schulman et al., 2015a) introduced trust region policy optimisation to explicitly control the speed of policy evolution of Gaussian policies over time, expressed in the form of the Kullback-Leibler divergence, during the training process. Nevertheless, the original TRPO method suffered from poor sample efficiency in comparison to off-policy methods such as DDPG. One way to solve this issue is to replace the first order gradient descent methods, standard for deep learning, with the second order natural gradient (Amari, 1998). Wu et al. (2017) used a Kronecker-factored Approximate Curvature (K-FAC) optimiser (Martens & Grosse, 2015) in their ACKTR method. The PPO method (Schulman et al., 2017) proposes a number of modifications to the TRPO scheme, including changing the objective function formulation and clipping the gradients. Wang et al. (2016) proposed another approach in their ACER algorithm: in this method, the target network is still maintained in the off-policy way, similar to DDPG (Lillicrap et al., 2015), while the trust region constraint is built upon the difference between the current and the target network.
Related to our approach, a group of methods has recently appeared that attempts to get the benefits of both groups of methods. Gu et al. (2017) propose the interpolated policy gradient, which uses the weighted sum of both the stochastic (Sutton et al., 2000) and the deterministic policy gradient (Silver et al., 2014). Nachum et al. (2018) propose an off-policy trust region method, Trust-PCL, which exploits off-policy data within the trust region optimisation framework, while maintaining stability of optimisation by using relative entropy regularisation.
While it is common practice to use replay buffers in off-policy reinforcement learning, the existing concept is not used in combination with on-policy scenarios, which results in discarding all policies but the last. Furthermore, many on-policy methods, such as TRPO (Schulman et al., 2015a), rely on the stochastic policy gradient (Sutton et al., 2000), which is restricted by stationarity assumptions, in contrast to those based on the deterministic policy gradient (Silver et al., 2014), like DDPG (Lillicrap et al., 2015). In this article, we describe a novel reinforcement learning algorithm allowing the joint use of replay buffers with trust region optimisation and leading to improved sample efficiency. The contributions of the paper are given as follows:
1. a reinforcement learning method enabling the replay buffer concept along with on-policy data;
2. theoretical insights into the replay buffer usage within the on-policy setting are discussed;
3. we show that, unlike the state-of-the-art methods as ACKTR (Wu et al., 2017), PPO (Schulman et al., 2017) and TRPO (Schulman et al., 2015a), a single non-adaptive set of hyperparameters such as the trust region radius is sufficient for achieving better performance on a number of reinforcement learning tasks.
As we are committed to making sure the experiments in our paper are repeatable and to further ensuring their acceptance by the community, we will release our source code shortly after publication.
2 BACKGROUND
2.1 ACTOR-CRITIC REINFORCEMENT LEARNING
Consider an agent interacting with the environment by responding to the states s_t, t ≥ 0, from the state space S, which are assumed to also be the observations, with actions a_t from the action space A chosen by the policy distribution π_θ(·|s_t), where θ are the parameters of the policy. The initial state distribution is ρ_0 : S → R. Every time the agent produces an action, the environment gives back a reward r(s_t, a_t) ∈ R, which serves as feedback on how good the action choice was, and switches to the next state s_{t+1} according to the transition probability P(s_{t+1}|s_t, a_t). Altogether, this can be formalised as an infinite-horizon γ-discounted Markov Decision Process (S, A, P, r, ρ_0, γ), γ ∈ [0, 1) (Wu et al., 2017; Schulman et al., 2015a). The expected discounted return (Bellman, 1957) is defined as per Schulman et al. (2015a):
$\rho(\pi) = \mathbb{E}_{s_0, a_0, \dots}\big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\big]$  (1)
The advantage function A^π (Baird III, 1993), the value function V^π and the Q-function Q^π are defined as per Mnih et al. (2016) and Schulman et al. (2015a):

$A^{\pi}(s, a) = Q^{\pi}(s, a) - V^{\pi}(s),$  (2)

$Q^{\pi}(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \dots}\big[\sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l})\big], \quad t \ge 0,$  (3)

$V^{\pi}(s_t) = \mathbb{E}_{a_t, s_{t+1}, \dots}\big[\sum_{l=0}^{\infty} \gamma^l r(s_{t+l}, a_{t+l})\big], \quad t \ge 0.$  (4)

In all the above definitions, $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi(a_t|s_t)$, $s_{t+1} \sim P(s_{t+1}|s_t, a_t)$, and the policy $\pi = \pi_\theta$ is defined by its parameters $\theta$.
2.2 TRUST REGION POLICY OPTIMISATION (TRPO)
A straightforward approach to learning a policy is to perform unconstrained maximisation of ρ(π_θ) with respect to the policy parameters θ. However, for the state-of-the-art iterative gradient-based optimisation methods, this approach would lead to unpredictable and uncontrolled changes in the policy, which would impede efficient exploration. Furthermore, in practice the exact values of ρ(π_θ) are unknown, and the quality of its estimates depends on approximators which tend to be correct only in the vicinity of the parameters of observed policies.
Schulman et al. (2015a), based on theorems by Kakade (2002), prove a minorisation-maximisation (MM) algorithm (Hunter & Lange, 2004) for policy parameter optimisation. Schulman et al. (2015a) mention that, in practice, the algorithm's convergence rate and the complexity of the maximum KL divergence computations make it impractical to apply this method directly. Therefore, they proposed to replace the unconstrained optimisation with a similar constrained optimisation problem, the Trust Region Policy Optimisation (TRPO) problem:
$\arg\max_{\theta} \rho(\pi_\theta)$  (5)

subject to $D_{KL}(\pi_{\theta_{old}}, \pi_\theta) \le \delta,$  (6)

where $D_{KL}$ is the KL divergence between the old policy $\pi_{\theta_{old}}$ and the new policy $\pi_\theta$, and $\delta$ is the trust region radius. Despite this improvement, further enhancements are needed to solve this problem efficiently, as we elaborate in the next section.
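For the Gaussian policies considered here, the constraint in Equation (6) has a closed form; the following is a minimal sketch for diagonal covariances (the function name is ours):

```python
import numpy as np

def kl_diag_gaussians(mu_old, var_old, mu_new, var_new):
    """KL(pi_old || pi_new) for diagonal Gaussian policies, as used in
    the trust region constraint D_KL <= delta of Equation (6)."""
    return 0.5 * np.sum(
        np.log(var_new / var_old)
        + (var_old + (mu_old - mu_new) ** 2) / var_new
        - 1.0
    )
```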
2.3 SECOND ORDER ACTOR-CRITIC NATURAL GRADIENT OPTIMISATION
Many of the state-of-the-art trust region based methods, including TRPO (Schulman et al., 2015a) and ACKTR (Wu et al., 2017), use second order natural gradient based actor-critic optimisation (Amari, 1998; Kakade, 2002). The motivation behind it is to eliminate the issue that the gradient descent loss, calculated as the Euclidean norm, is dependent on the parametrisation. For this purpose, the Fisher information matrix is used, which, as follows from Amari (1998) and Kakade (2002), normalises per-parameter changes in the objective function. In the context of actor-critic optimisation it can be written as follows (Wu et al., 2017; Kakade, 2002), where $p(\tau)$ is the trajectory distribution $p(s_0)\prod_{t=0}^{T} \pi(a_t|s_t)\, p(s_{t+1}|s_t, a_t)$:
$F = \mathbb{E}_{p(\tau)}\big[\nabla_\theta \log \pi(a_t|s_t)\, (\nabla_\theta \log \pi(a_t|s_t))^{T}\big].$  (7)
However, the computation of the Fisher matrix is intractable in practice due to the large number of parameters involved; therefore, there is a need to resort to approximations, such as the Kronecker-factored approximate curvature (K-FAC) method (Martens & Grosse, 2015), which was first applied to this setting in ACKTR (Wu et al., 2017). In the proposed method, as detailed in Algorithm 1, this optimiser is used for optimising the policy.
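As an illustration of why approximations are needed, a naive Monte-Carlo estimate of Equation (7) already requires a full P × P matrix over the P parameters (names ours):

```python
import numpy as np

def empirical_fisher(score_vectors):
    """Monte-Carlo estimate of Equation (7): the average outer product
    of per-sample score vectors grad_theta log pi(a_t | s_t)."""
    g = np.asarray(score_vectors)   # shape: (num_samples, num_params)
    return g.T @ g / g.shape[0]     # shape: (num_params, num_params)
```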
3 METHOD DESCRIPTION
While the original trust region optimisation method can only use the samples from the very last policy, discarding the potentially useful information from the previous ones, we make use of samples over several consecutive policies. The rest of the section defines the proposed replay buffer adaptation, and then formulates and discusses the proposed algorithm.
3.1 USAGE OF REPLAY BUFFERS
Mnih et al. (2013) suggested using replay buffers for DQN to improve the stability of learning, a concept which has since been extended to other off-policy methods such as DDPG (Lillicrap et al., 2015). The concept has not been applied to on-policy methods like TRPO (Schulman et al., 2015a) or ACKTR (Wu et al., 2017), which make no use of previous data generated by other policies. Although based on trust region optimisation, ACER (Wang et al., 2016) uses replay buffers for its off-policy part.
In this paper, we propose a different replay buffer concept, which combines the on-policy data with data from several previous policies so as to avoid the policy distribution stationarity restrictions of the stochastic policy gradient (Sutton et al., 2000). Such replay buffers store simulations from several policies at the same time, which are then utilised in the method via generalised value and advantage functions accommodating data from these policies. The following definitions are necessary for the formalisation of the proposed algorithm and theorems.
We define a generalised Q-function for multiple policies {π1, . . . , πn, . . . , πN} as
Qπ(st, at) = EnEsnt+1,ant+1,... [ ∞∑ l=0 γlr(snt+l, a n t+l) ] , t ≥ 0, (8)
sn0 ∼ ρ0(sn0 ), snt+1 ∼ P (snt+1|snt , ant ), ant ∼ πn(ant |snt ). (9) We also define the generalised value function and the generalised advantage function as
V π(st) = EnEat,st+1,at+1 [ ∞∑ l=0 γlr(snt+l, a n t+l) ] , t ≥ 0, (10)
$$A^{\pi_n}(s_t, a_t) = Q^{\pi_n}(s_t, a_t) - V^{\bar{\pi}}(s_t), \quad t \ge 0. \qquad (11)$$
To conform with the notation from Sutton et al. (2000), we define
$$\rho(\bar{\pi}) = V^{\bar{\pi}}(s_0), \qquad D^{\pi_n}(s) = \sum_{k=0}^{\infty} \gamma^{k} P(s_0 \to s, k, \pi_n), \qquad (12)$$
where $P(s \to x, k, \pi)$, as in Sutton et al. (2000), is the probability of transitioning from state s to state x in k steps under policy π. Theorem 1. For the set of policies {π₁, . . . , π_N}, the following equality holds for the gradient:
$$\frac{\partial \rho^{\bar{\pi}}}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \int_{s} ds\, D^{\pi_n}(s) \int_{a} da\, \frac{\partial \pi_n(s, a)}{\partial \theta}\left[Q^{\pi_n}(s, a) + b^{\pi_n}(s)\right], \qquad (13)$$

where θ are the joint parameters of all policies {π_n} and $b^{\pi_n}(s)$ is a bias function for the policy.
The proof of Theorem 1 is given in Appendix B. Applying a particular case of the bias function $b^{\pi_n}(s) = -V^{\bar{\pi}}(s)$ and using the likelihood ratio transformation, one can get
$$\frac{\partial \rho^{\bar{\pi}}}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \int_{s} ds\, D^{\pi_n}(s) \int_{a} da\, \pi_n(s, a)\, \frac{\partial \log \pi_n(s, a)}{\partial \theta}\, A^{\pi_n}(s, a). \qquad (14)$$
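To make Equation (14) concrete, the sketch below forms the gradient as a p(π_n)-weighted sum of advantage-weighted score vectors; all arrays are random stand-ins for quantities that would come from sampled paths.

```python
import numpy as np

rng = np.random.default_rng(1)
num_policies, steps, num_params = 3, 50, 8

# Illustrative ingredients: one weight p(pi_n) per policy, and per-step
# score vectors and advantage estimates sampled under each policy.
p = np.full(num_policies, 1.0 / num_policies)
scores = rng.normal(size=(num_policies, steps, num_params))
advantages = rng.normal(size=(num_policies, steps))

# Equation (14) as a Monte Carlo sum: average the advantage-weighted
# scores within each policy, then mix the policies with weights p(pi_n).
per_policy = (advantages[..., None] * scores).mean(axis=1)
grad = (p[:, None] * per_policy).sum(axis=0)
print(grad.shape)  # (num_params,)
```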
3.2 ALGORITHM DESCRIPTION
The proposed approach is summarised in Algorithm 1. The replay buffer R_p contains data collected from several subsequent policies. The size of this buffer is RBP_CAPACITY.
Algorithm 1 Trust Regions Algorithm with a Replay Buffer
{Initialisation} Randomly initialise the weights θ of the policy estimator π(·) and Ψ of the value function estimator Ṽ(·). Initialise the policy replay buffer R_p = {}, set i = 0, δ = DELTA.
while i < MAX_TIMESTEPS do
{Stage 1} Collect n_i data paths using the current policy π(·): P = {⟨(s^j_0, a^j_0, r^j_0), . . . , (s^j_{k_j}, a^j_{k_j}, r^j_{k_j})⟩}_{j=0}^{n_i}; increase i by the total number of timesteps in all new paths.
{Stage 2} Put the recorded paths into the policy paths replay buffer: R_p ← P.
{Stage 3} For every path in R_p, compute the targets for the value function regression using Equation (15); update the value function estimator parameters Ψ.
{Stage 4} For every path in R_p, estimate the advantage function using Equation (23).
{Stage 5} Update the policy parameters θ for N_ITER_PL_UPDATE iterations using the gradient from Equation (25) and the barrier function defined in Equation (26).
end while
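For orientation, a minimal Python sketch of this loop is given below; `collect_paths`, `fit_value_function`, `estimate_advantages` and `update_policy` are hypothetical helpers standing in for Stages 1-5, and the deque is one possible realisation of R_p.

```python
from collections import deque

def train(env, policy, value_fn, rbp_capacity=3, max_timesteps=1_000_000,
          timesteps_per_batch=2048, delta=0.05):
    # R_p holds whole batches of paths, one entry per recent policy;
    # the deque drops the oldest policy's data once capacity is reached.
    replay_buffer = deque(maxlen=rbp_capacity)
    i = 0
    while i < max_timesteps:
        paths = collect_paths(env, policy, timesteps_per_batch)   # Stage 1
        i += sum(len(p["rewards"]) for p in paths)
        replay_buffer.append(paths)                               # Stage 2
        all_paths = [p for batch in replay_buffer for p in batch]
        fit_value_function(value_fn, all_paths)                   # Stage 3, Eq. (15)
        estimate_advantages(all_paths, value_fn)                  # Stage 4, Eq. (23)
        update_policy(policy, all_paths, delta)                   # Stage 5, Eqs. (25)-(26)
```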
During Stage 1, data are collected for every path until the termination state is reached, with at least TIMESTEPS_PER_BATCH steps in total across all paths. The policy actions are sampled from a Gaussian distribution, whose mean values are predicted by the policy estimator along with the diagonal of the covariance matrix. The covariance matrix output was inspired by the EPG paper (Ciosek & Whiteson, 2017), although the idea there is different.
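The sampling step, together with the covariance clipping mentioned in Stage 5, can be sketched as follows; the clipping bounds stand in for MIN_COV_EL and MAX_COV_EL, and their values here are assumptions.

```python
import numpy as np

def sample_action(mean, cov_diag, min_cov_el=1e-3, max_cov_el=1.0,
                  rng=np.random.default_rng()):
    """Draw a ~ N(mean, diag(cov_diag)) with clipped covariance elements."""
    cov_diag = np.clip(cov_diag, min_cov_el, max_cov_el)
    return mean + np.sqrt(cov_diag) * rng.normal(size=mean.shape)

print(sample_action(np.zeros(3), np.array([0.5, 2.0, 1e-6])))
```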
At Stage 2, the obtained data for every policy are saved in the policy replay buffer Rp.
At Stage 3, the value function regression is trained using the Adam optimiser (Kingma & Ba, 2015) with step size VF_STEP_SIZE for N_ITER_VF_UPDATE iterations, with a sum-of-squares loss function. The value function target values are computed for every state s_t, for every policy in the replay buffer, using the actual sampled returns, where t_max is the maximum policy step index:
$$\hat{V}(s_t) = \sum_{l=0}^{t_{\max}-t} \gamma^{l}\, r(s_{t+l}, a_{t+l}). \qquad (15)$$
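A direct implementation of Equation (15) computes the targets in a single backward pass over each path's rewards; the sketch below assumes rewards are given as a plain list.

```python
import numpy as np

def value_targets(rewards, gamma=0.99):
    """Discounted-return targets of Equation (15), one per visited state."""
    targets = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        targets[t] = running
    return targets

print(value_targets([1.0, 0.0, 2.0], gamma=0.9))  # [2.62, 1.8, 2.0]
```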
During Stage 4, we perform the advantage function estimation.
Schulman et al. (2015b) proposed the Generalised Advantage Estimator for the advantage function Aπ(st, at) as follows:
$$\tilde{A}^{\pi}(s_t, a_t) = (1-\lambda)\left(\hat{A}^{\pi,(1)}_t + \lambda \hat{A}^{\pi,(2)}_t + \lambda^{2} \hat{A}^{\pi,(3)}_t + \ldots\right), \qquad (16)$$

where

$$\hat{A}^{\pi,(1)}_t = -\tilde{V}^{\pi}(s_t) + r_t + \gamma \tilde{V}^{\pi}(s_{t+1}), \quad \ldots \qquad (17)$$

$$\hat{A}^{\pi,(k)}_t = -\tilde{V}^{\pi}(s_t) + r_t + \ldots + \gamma^{k-1} r_{t+k-1} + \gamma^{k} \tilde{V}^{\pi}(s_{t+k}), \quad \ldots \qquad (18)$$
Here k > 0 is a cut-off value, defined by the length of the sequence of occurred states and actions within the MDP, λ ∈ [0, 1] is an estimator parameter, and $\tilde{V}^{\pi}(s_t)$ is the approximation of the value function $V^{\pi}(s_t)$, with the approximation targets defined in Equation (15). As proved in Schulman et al. (2015b), after rearrangement this results in the generalised advantage function estimator
$$\tilde{A}^{\pi}(s_t, a_t) = \sum_{l=0}^{k-1} (\gamma\lambda)^{l}\left(r_{t+l} + \gamma \tilde{V}^{\pi}(s_{t+l+1}) - \tilde{V}^{\pi}(s_{t+l})\right). \qquad (19)$$
For the proposed advantage function (see Equation 11), the estimator could be defined similarly to Schulman et al. (2015b) as
$$\tilde{A}^{\pi_n}(s_t, a_t) = (1-\lambda)\left(\hat{A}^{\pi_n,(1)}_t + \lambda \hat{A}^{\pi_n,(2)}_t + \lambda^{2} \hat{A}^{\pi_n,(3)}_t + \ldots\right), \qquad (20)$$

$$\hat{A}^{\pi_n,(1)}_t = -\tilde{V}^{\bar{\pi}}(s_t) + r_t + \gamma \tilde{V}^{\pi_n}(s_{t+1}), \quad \ldots \qquad (21)$$

$$\hat{A}^{\pi_n,(k)}_t = -\tilde{V}^{\bar{\pi}}(s_t) + r_t + \gamma r_{t+1} + \ldots + \gamma^{k-1} r_{t+k-1} + \gamma^{k} \tilde{V}^{\pi_n}(s_{t+k}). \qquad (22)$$
However, that would require estimating multiple value functions, which undermines the replay buffer idea. To avoid this, we modify the estimator for the proposed advantage function as
$$\tilde{A}^{\pi_n}(s_t, a_t) = \sum_{l=0}^{k-1} (\gamma\lambda)^{l}\left(r_{t+l} + \gamma \tilde{V}^{\bar{\pi}}(s_{t+l+1}) - \tilde{V}^{\bar{\pi}}(s_{t+l})\right). \qquad (23)$$
Theorem 2. The difference between the estimators (20) and (23) is

$$\Delta \tilde{A}^{\pi_n}(s_t, a_t) = \gamma(1-\lambda) \sum_{l=1}^{k} \lambda^{l-1}\gamma^{l-1}\left(\tilde{V}^{\pi_n}(s_{t+l}) - \tilde{V}^{\bar{\pi}}(s_{t+l})\right). \qquad (24)$$
The proof of Theorem 2 is given in Appendix C. It shows that the difference between the two estimators depends on the difference between the conventional and the generalised value functions; given a continuous value function approximator, this reveals that the closer the policies are (within a few trust region radii), the smaller the bias will be.
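For concreteness, a sketch of the estimator in Equation (23); note that it requires only the single shared value function Ṽ evaluated along a path from any policy in the buffer, and the bootstrap value for the final state is assumed to be supplied by the caller.

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.99, lam=0.97):
    """Equation (23): GAE computed with the shared value function.

    rewards: r_t along one path; values: V(s_t) for the same states;
    last_value: V(s_k) used to bootstrap after the final step.
    """
    values = np.append(values, last_value)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

print(gae(np.array([1.0, 1.0]), np.array([0.5, 0.4]), last_value=0.3))
```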
During Stage 5, the policy is optimised using the K-FAC optimiser (Martens & Grosse, 2015) with the constant step size PL_STEP_SIZE. Unlike ACKTR, we do not use any adaptation of the trust region radius or of the optimisation algorithm parameters. Also, the policy outputs include the diagonal of the (diagonal) covariance matrix. For the purpose of efficient optimisation, the elements of the covariance matrix are restricted to universal minimum and maximum values MIN_COV_EL and MAX_COV_EL.
As an extension of Schulman et al. (2015b), and following Theorem 1 with the likelihood ratio substitution, the policy gradient estimate is defined as
$$\nabla \rho(\theta) \approx \mathbb{E}_n\,\mathbb{E}_{\pi_n}\left[\sum_{t=0}^{\infty} \tilde{A}^{\pi_n}(s^n_t, a^n_t)\,\nabla_\theta \log \pi_n(a^n_t \mid s^n_t)\right]. \qquad (25)$$
To implement this gradient in practice, we substitute the parameters θ_π of the latest policy in the replay buffer for the joint parameters θ, assuming that the parameters do not deviate far from each other due to the trust region restrictions; it is still possible to compute the estimate $\tilde{A}^{\pi_n}(s^n_t, a^n_t)$ for each policy using Equation (23), as these policies have been observed. For the constrained optimisation, we add a linear barrier function to the objective ρ(θ):
$$\rho_b(\theta) = \rho(\theta) - \alpha \cdot \max\left(0,\; D_{\mathrm{KL}}(\pi_{\theta_{\mathrm{old}}}, \pi_\theta) - \delta\right), \qquad (26)$$
where α > 0 is a barrier function parameter and θ_old are the policy parameters from the previous iteration. Besides removing the need for heuristic estimation of the optimisation parameters, this also conforms with the theoretical propositions of Schulman et al. (2017) and, while our approach was proposed independently, pursues similar ideas of using an actual constrained optimisation method instead of adapting the gradient step size parameters as in Schulman et al. (2015a).
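A sketch of the barrier-penalised objective of Equation (26) for diagonal Gaussian policies; the closed-form KL divergence is standard, while the surrogate value and the settings of α and δ here are purely illustrative.

```python
import numpy as np

def kl_diag_gauss(mu_old, var_old, mu_new, var_new):
    """KL(pi_old || pi_new) for diagonal Gaussian policies, summed over dims."""
    return 0.5 * np.sum(np.log(var_new / var_old)
                        + (var_old + (mu_old - mu_new) ** 2) / var_new - 1.0)

def barrier_objective(surrogate, mu_old, var_old, mu_new, var_new,
                      alpha=10.0, delta=0.05):
    """Equation (26): subtract a linear penalty once KL exceeds the radius."""
    kl = kl_diag_gauss(mu_old, var_old, mu_new, var_new)
    return surrogate - alpha * max(0.0, kl - delta)

mu_o, v_o = np.zeros(2), np.ones(2)
mu_n, v_n = np.array([0.3, -0.1]), np.array([0.9, 1.1])
print(barrier_objective(surrogate=1.0, mu_old=mu_o, var_old=v_o,
                        mu_new=mu_n, var_new=v_n))
```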
The network architectures correspond to the OpenAI Baselines ACKTR implementation (Dhariwal et al., 2017), which was written by the ACKTR authors (Wu et al., 2017). The only departure from that architecture is the diagonal covariance matrix output, which is present in the policy network in addition to the mean output.
4 EXPERIMENTS
4.1 EXPERIMENTAL RESULTS
In order to provide the experimental evidence for the method, we have compared it with the on-policy ACKTR (Wu et al., 2017), PPO (Schulman et al., 2017) and TRPO (Schulman et al., 2015a) methods, as well as with the off-policy DDPG (Lillicrap et al., 2015) method on the MuJoCo (Todorov et al., 2012) robotic simulations. The technical implementation is described in Appendix A.
Figure 1 shows the total reward values and their standard deviations, averaged over every one hundred simulation steps and over three randomised runs. The results show drastic improvements over the state-of-the-art methods, including the on-policy ones (ACKTR, TRPO, PPO), on most problems.
In contrast to those methods, our method shows that the adaptive trust region radius can be advantageously replaced by a fixed value in combination with a trainable policy distribution covariance matrix, thus reducing the number of necessary hyperparameters. The ACKTR results for the tasks HumanoidStandup, Striker and Thrower are not included, as the baseline ACKTR implementation (Dhariwal et al., 2017) diverged in the first iterations with the predefined parameterisation. The PPO results are obtained from the Baselines implementation PPO1 (Dhariwal et al., 2017).
Figure 2 compares results for different replay buffer sizes; the size of a replay buffer reflects the number of policies in it, not the number of actions (i.e., buffer size 3 means data from three successive policies are in the replay buffer). We see that in most cases the use of replay buffers shows a performance improvement over replay buffer size 1 (i.e., no replay buffer, with only the current policy used for the policy gradient); substantial improvements can be seen for the HumanoidStandup task.
Figure 3 shows the performance comparison with the DDPG method (Lillicrap et al., 2015). In all tasks except HalfCheetah and Humanoid, the proposed method outperforms DDPG. For HalfCheetah, the versions with a replay buffer marginally outperform the one without. It is also remarkable that the method demonstrates stable performance on the tasks HumanoidStandup, Pusher, Striker and Thrower, on which DDPG failed (these tasks were not included in the DDPG article).
5 CONCLUSION
The paper combines replay buffers and on-policy data for reinforcement learning. Experimental results on various tasks from the MuJoCo suite (Todorov et al., 2012) show significant improvements compared to the state of the art. Moreover, we proposed replacing the heuristically calculated trust region parameters with a single fixed hyperparameter, which also reduces the computational expenses, and a trainable diagonal covariance matrix.
The proposed approach opens the door to using a combination of replay buffers and trust regions for reinforcement learning problems. While it is formulated for continuous tasks, it is possible to reuse the same ideas for discrete reinforcement learning tasks, such as ATARI games.
A TECHNICAL IMPLEMENTATION
The parameters of Algorithm 1 used in the experiments are given in Table 1; where possible, the parameters were initially set to the ones from the state-of-the-art trust region approach implementation (Wu et al., 2017; Dhariwal et al., 2017), and some of them were then changed based on experimental evidence. As the underlying numerical optimisation algorithms are out of the scope of the paper, the K-FAC optimiser parameters from Dhariwal et al. (2017) have been used for the experiments; for the Adam algorithm (Kingma & Ba, 2015), the default parameters from the TensorFlow (Abadi et al., 2016) implementation (β₁ = 0.9, β₂ = 0.999, ε = 1 · 10⁻⁸) have been used.
The method has been implemented in Python 3 using Tensorflow (Abadi et al., 2016) as an extension of the OpenAI baselines package (Dhariwal et al., 2017). The neural network for the control experiments consists of two fully connected layers, containing 64 neurons each, following the OpenAI ACKTR network implementation (Dhariwal et al., 2017).
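Following that description, a minimal numpy sketch of the policy network: two 64-unit fully connected layers with a mean head and a diagonal-covariance head; the tanh nonlinearity and the weight initialisation are assumptions where the text leaves them unspecified.

```python
import numpy as np

def init_policy(obs_dim, act_dim, rng=np.random.default_rng(0)):
    sizes = [obs_dim, 64, 64]
    layers = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    mean_head = (rng.normal(0, 0.1, (64, act_dim)), np.zeros(act_dim))
    cov_head = (rng.normal(0, 0.1, (64, act_dim)), np.zeros(act_dim))
    return layers, mean_head, cov_head

def policy_forward(params, obs):
    layers, (w_mu, b_mu), (w_c, b_c) = params
    h = obs
    for w, b in layers:
        h = np.tanh(h @ w + b)
    mean = h @ w_mu + b_mu
    cov_diag = np.exp(h @ w_c + b_c)  # positive diagonal covariance
    return mean, cov_diag

params = init_policy(obs_dim=11, act_dim=3)
print(policy_forward(params, np.zeros(11)))
```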
B PROOF OF THEOREM 1
Proof. Extending the derivation from Sutton et al. (2000), one can see that:

$$\frac{\partial V^{\pi}(s)}{\partial \theta} \stackrel{\mathrm{def}}{=} \frac{\partial}{\partial \theta} \int_{a} da\, \pi(s, a)\left(Q^{\pi}(s, a) + b^{\pi}(s)\right) = \int_{x} dx \sum_{k=0}^{\infty} \gamma^{k} P(s \to x, k, \pi) \int_{a} da\, \frac{\partial \pi(x, a)}{\partial \theta}\left(Q^{\pi}(x, a) + b^{\pi}(x)\right) \qquad (27)$$
Then,

$$\frac{\partial \rho^{\bar{\pi}}}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \frac{\partial V^{\pi_n}(s_0)}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \int_{s} ds \sum_{k=0}^{\infty} \gamma^{k} P(s_0 \to s, k, \pi_n) \int_{a} da\, \frac{\partial \pi_n(s, a)}{\partial \theta}\left(Q^{\pi_n}(s, a) + b^{\pi_n}(s)\right) = \sum_{n=1}^{N} p(\pi_n) \int_{s} ds\, D^{\pi_n}(s) \int_{a} da\, \frac{\partial \pi_n(s, a)}{\partial \theta}\left(Q^{\pi_n}(s, a) + b^{\pi_n}(s)\right) \qquad (28)$$
C PROOF OF THEOREM 2
Proof. The difference between the two k-th estimators is given as
$$\Delta \hat{A}^{\pi_n,(k)}_t = \gamma^{k} \underbrace{\left(\tilde{V}^{\pi_n}(s_{t+k}) - \tilde{V}^{\bar{\pi}}(s_{t+k})\right)}_{\Delta V^{k}} \qquad (29)$$
By substituting this into the GAE estimator difference one can obtain
$$\Delta \tilde{A}^{\pi_n}(s_t, a_t) = (1-\lambda)\left(\gamma \Delta V^{1} + \lambda\gamma^{2} \Delta V^{2} + \lambda^{2}\gamma^{3} \Delta V^{3} + \ldots + \lambda^{k-1}\gamma^{k} \Delta V^{k}\right) = \gamma(1-\lambda)\sum_{l=1}^{k} \lambda^{l-1}\gamma^{l-1}\,\Delta V^{l}. \qquad (30)$$

1. What are the limitations of using replay buffers for storing simulations from multiple policies?
2. How does the proposed method handle off-policy learning?
3. What are the concerns regarding the experimental settings, particularly the small replay buffer size?
4. Can the authors provide clarification or additional information to address the reviewer's misunderstandings?

Review
The paper tries to bring together the replay buffer and on-policy methods. However, the reviewer found major flaws in such a method.
- Such replay buffers are used for storing simulations from several policies at the same time, which are then utilised in the method, built upon generalised value and advantage functions, accommodating data from these policies.
If the experience the policy is learning from is not generated by the same policy, that is off-policy learning.
In the experiment part, the replay buffer size is often very tiny, e.g., 3 or 5. The reviewer believes there may be something wrong in the experiment setting. Or if the reviewer understood it incorrectly, please clarify the reason behind such a tiny replay buffer.
1. What is the main contribution of the paper regarding off-policy methods for TRPO?
2. What are the strengths of the proposed approach, particularly in its extension of the Q, value, and advantage functions?
3. Do you have any concerns regarding the necessity of defining these generalized notions of the Q, value, and advantage functions?
4. How does the reviewer assess the novelty and significance of introducing a learnable diagonal covariance matrix?
5. What are the weaknesses of the paper regarding its experimental comparisons and lack of references to prior works?
6. Minor suggestions:
* Provide clearer explanations of how the different components interact in section 2.1.
* Add more references to prior works that introduced the expected return, advantages, Q-, and value functions.
* Correct the references for the figures in the experiments part.

Review
The authors introduce an off-policy method for TRPO by suggesting the use of replay buffers to store trajectories and sample from them during training. To do this, they extend the definition of the Q-function to multiple policies, where the generalised Q (Q with a bar over π) is then the expectation over the several policies. They propose the same for the value function and, consequently, the advantage function.
In my opinion this is some interesting work, but there are some details that are not clear to me, so I have several questions.
1. Why is it necessary to define these generalized notions of the Q, value and advantage functions? You motivate this by the fact that the samples stored in the replay buffer will be generated by different policies, i.e. by differently parametrized policies at a certain time step. But this also holds for almost all algorithms using replay buffers. Could you please explain this part further?
2. In eq. (26) you introduce the parameter alpha as a sort of Lagrange multiplier to turn the constrained optimization problem defined by TRPO into an unconstrained one. This was also proposed earlier by Schulman et al. in Proximal Policy Optimization, yet it is not cited or referenced. Please go further into this in the discussion of the experimental results and explain this part in more detail.
3. Another point of your work is the learnable diagonal covariance matrix. How can you be sure that the improvements you show are due to the replay buffers and not due to learning these? Or learning covariance in combination with the penalty term alpha?
4. Can you provide comparative results for PPO? PPO outperforms DDPG and TRPO on most tasks, so it would be interesting to see.
5. How many trajectory samples do you store in the replay buffers? Can you provide results where you use your method but without any replay buffers, i.e. by using the last batch of data points?
Minor Suggestions:
- The references for the figures in the Experiments part are off. In fig. 1 you cite Todorov et al. for Mujoco but not TRPO and ACKTR, the same in fig. 2. Then in fig. 3 you cite DDPG also with Todorov et al.
- Some parts of the text are a bit unorganized. In section 2.1 you introduce AC algorithms and on the next page you give the definitions for all components, but you don't say anything about how they interact. Also, the definition of the expected return was not "invented" by Schulman et al., and neither were advantages, Q-, and value functions. Maybe add a second or third reference.
ICLR | Title
On-Policy Trust Region Policy Optimisation with Replay Buffers
Abstract
Building upon the recent success of deep reinforcement learning methods, we investigate the possibility of on-policy reinforcement learning improvement by reusing the data from several consecutive policies. On-policy methods bring many benefits, such as ability to evaluate each resulting policy. However, they usually discard all the information about the policies which existed before. In this work, we propose adaptation of the replay buffer concept, borrowed from the off-policy learning setting, to create the method, combining advantages of onand off-policy learning. To achieve this, the proposed algorithm generalises the Q-, value and advantage functions for data from multiple policies. The method uses trust region optimisation, while avoiding some of the common problems of the algorithms such as TRPO or ACKTR: it uses hyperparameters to replace the trust region selection heuristics, as well as the trainable covariance matrix instead of the fixed one. In many cases, the method not only improves the results comparing to the state-of-the-art trust region on-policy learning algorithms such as PPO, ACKTR and TRPO, but also with respect to their off-policy counterpart DDPG.
1 INTRODUCTION
The past few years have been marked by active development of reinforcement learning methods. Although the mathematical foundations of reinforcement learning have been known long before (Sutton & Barto, 1998), starting from 2013, the novel deep learning techniques allowed to solve vision based discrete control tasks such as Atari 2600 games (Mnih et al., 2013) as well as continuous control problems (Lillicrap et al., 2015; Mnih et al., 2016). Many of the leading state-of-the-art reinforcement learning methods share the actor-critic architecture (Crites & Barto, 1995). Actorcritic methods separate the actor, providing a policy, and the critic, providing an approximation for the expected discounted cumulative reward or some derived quantities such as advantage functions (Baird III, 1993). However, despite improvements, state-of-the-art reinforcement learning still suffers from poor sample efficiency and extensive parameterisation. For most real-world applications, in contrast to simulations, there is a need to learn in real time and over a limited training period, while minimising any risk that would cause damage to the actor or the environment.
Reinforcement learning algorithms can be divided into two groups: on-policy and off-policy learning. On-policy approaches (e. g., SARSA (Rummery & Niranjan, 1994), ACKTR (Wu et al., 2017)) evaluate the target policy by assuming that future actions will be chosen according to it, hence the exploration strategy must be incorporated as a part of the policy. Off-policy methods (e. g., Qlearning (Watkins, 1989), DDPG (Lillicrap et al., 2015)) separate the exploration strategy, which modifies the policy to explore different states, from the target policy.
The off-policy methods commonly use the concept of replay buffers to memorise the outcomes of the previous policies and therefore exploit the information accumulated through the previous iterations (Lin, 1993). Mnih et al. (2013) combined this experience replay mechanism with Deep Q-Networks (DQN), demonstrating end-to-end learning on Atari 2600 games. One limitation of DQN is that it can only operate on discrete action spaces. Lillicrap et al. (2015) proposed an extension of DQN to handle continuous action spaces based on the Deep Deterministic Policy Gradient (DDPG). There, exponential smoothing of the target actor and critic weights has been introduced to ensure stability of the rewards and critic predictions over the subsequent iterations. In order to improve the variance of policy gradients, Schulman et al. (2015b) proposed a Generalised Advantage Function. Mnih
et al. (2016) combined this advantage function learning with a parallelisation of exploration using differently trained actors in their Asynchronous Advantage Actor Critic model (A3C); however, Wang et al. (2016) demonstrated that such parallelisation may also have negative impact on sample efficiency. Although some work has been performed on improvement of exploratory strategies for reinforcement learning (Hester et al., 2013), but it still does not solve the fundamental restriction of inability to evaluate the actual policy, neither it removes the necessity to provide a separate exploratory strategy as a separate part of the method.
In contrast to those, state-of-the-art on-policy methods have many attractive properties: they are able to evaluate exactly the resulting policy with no need to provide a separate exploration strategy. However, they suffer from poor sample efficiency, to a larger extent than off-policy reinforcement learning. TRPO method (Schulman et al., 2015a) has introduced trust region policy optimisation to explicitly control the speed of policy evolution of Gaussian policies over time, expressed in a form of Kullback-Leibler divergence, during the training process. Nevertheless, the original TRPO method suffered from poor sample efficiency in comparison to off-policy methods such as DDPG. One way to solve this issue is by replacing the first order gradient descent methods, standard for deep learning, with second order natural gradient (Amari, 1998). Wu et al. (2017) used a Kroneckerfactored Approximate Curvature (K-FAC) optimiser (Martens & Grosse, 2015) in their ACKTR method. PPO method (Schulman et al., 2017) proposes a number of modifications to the TRPO scheme, including changing the objective function formulation and clipping the gradients. Wang et al. (2016) proposed another approach in their ACER algorithm: in this method, the target network is still maintained in the off-policy way, similar to DDPG (Lillicrap et al., 2015), while the trust region constraint is built upon the difference between the current and the target network.
Related to our approach, recently a group of methods has appeared in an attempt to get the benefits of both groups of methods. Gu et al. (2017) propose interpolated policy gradient, which uses the weighted sum of both stochastic (Sutton et al., 2000) and deterministic policy gradient (Silver et al., 2014). Nachum et al. (2018) propose an off-policy trust region method, Trust-PCL, which exploits off-policy data within the trust regions optimisation framework, while maintaining stability of optimisation by using relative entropy regularisation.
While it is a common practice to use replay buffers for the off-policy reinforcement learning, their existing concept is not used in combination with the existing on-policy scenarios, which results in discarding all policies but the last. Furthermore, many on-policy methods, such as TRPO (Schulman et al., 2015a), rely on stochastic policy gradient (Sutton et al., 2000), which is restricted by stationarity assumptions, in a contrast to those based on deterministic policy gradient (Silver et al., 2014), like DDPG (Lillicrap et al., 2015). In this article, we describe a novel reinforcement learning algorithm, allowing the joint use of replay buffers with trust region optimisation and leading to sample efficiency improvement. The contributions of the paper are given as follows:
1. a reinforcement learning method, enabling replay buffer concept along with on-policy data;
2. theoretical insights into the replay buffer usage within the on-policy setting are discussed;
3. we show that, unlike the state-of-the-art methods as ACKTR (Wu et al., 2017), PPO (Schulman et al., 2017) and TRPO (Schulman et al., 2015a), a single non-adaptive set of hyperparameters such as the trust region radius is sufficient for achieving better performance on a number of reinforcement learning tasks.
As we are committed to make sure the experiments in our paper are repeatable and to further ensure their acceptance by the community, we will release our source code shortly after the publication.
2 BACKGROUND
2.1 ACTOR-CRITIC REINFORCEMENT LEARNING
Consider an agent, interacting with the environment by responding to the states st, t ≥ 0, from the state space S, which are assumed to be also the observations, with actions at from the action space A chosen by the policy distribution πθ(·|st), where θ are the parameters of the policy. The initial state distribution is ρ0 : S → R. Every time the agent produces an action, the environment gives back a reward r(st, at) ∈ R, which serves as a feedback on how good the action choice was and
switches to the next state st+1 according to the transitional probability P (st+1|st, at). Altogether, it can be formalised as an infinite horizon γ-discounted Markov Decision Process (S,A, P, r, ρ0, γ), γ ∈ [0, 1) (Wu et al., 2017; Schulman et al., 2015a). The expected discounted return (Bellman, 1957) is defined as per Schulman et al. (2015a):
ρ(π) = Es0,a0,··· [ ∞∑ t=0 γtr(st, at) ] (1)
The advantage function Aπ (Baird III, 1993), the value function V π and the Q-function Qπ are defined as per Mnih et al. (2016); Schulman et al. (2015a):
Aπ(s, a) = Qπ(s, a)− V π(s), (2)
Qπ(st, at) = Est+1,at+1,... [ ∞∑ l=0 γlr(st+l, at+l) ] , t ≥ 0, (3)
V π(st) = Eat,st+1,... [ ∞∑ l=0 γlr(st+l, at+l) ] , t ≥ 0 (4)
In all above definitions s0 ∼ ρ0(s0), at ∼ π(at|st), st+1 ∼ P (st+1|st, at), and the policy π = πθ is defined by its parameters θ.
2.2 TRUST REGION POLICY OPTIMISATION (TRPO)
A straightforward approach for learning a policy is to perform unconstrained maximisation ρ(πθ) with respect to the policy parameters θ. However, for the state-of-the-art iterative gradient-based optimisation methods, this approach would lead to unpredictable and uncontrolled changes in the policy, which would impede efficient exploration. Furthermore, in practice the exact values of ρ(πθ) are unknown, and the quality of its estimates depends on approximators which tend to be correct only in the vicinity of parameters of observed policies.
Schulman et al. (2015a), based on theorems by Kakade (2002), prove the minorisation-maximisation (MM) algorithm (Hunter & Lange, 2004) for policy parameters optimisation. Schulman et al. (2015a) mention that in practice the algorithm’s convergence rate and the complexity of maximum KL divergence computations makes it impractical to apply this method directly. Therefore, they proposed to replace the unconstrained optimisation with a similar constrained optimisation problem, the Trust Region Policy Optimisation (TRPO) problem:
arg max θ ρ(πθ) (5) DKL(πθold , πθ) ≤ δ, (6) where DKL is the KL divergence between the old and the new policy πθold and πθ respectively, and δ is the trust region radius. Despite this improvement, it needs some further enhancements to solve this problem efficiently, as we will elaborate in the next section.
2.3 SECOND ORDER ACTOR-CRITIC NATURAL GRADIENT OPTIMISATION
Many of the state-of-the-art trust region based methods, including TRPO (Schulman et al., 2015a) and ACKTR (Wu et al., 2017), use second order natural gradient based actor-critic optimisation (Amari, 1998; Kakade, 2002). The motivation behind it is to eliminate the issue that gradient descent loss, calculated as the Euclidean norm, is dependent on parametrisation. For this purpose, the Fisher information matrix is used, which is, as it follows from Amari (1998) and Kakade (2002), normalises per-parameter changes in the objective function. In the context of actor-critic optimisation it can be written as (Wu et al., 2017; Kakade, 2002), where p(τ) is the trajectory distribution p(s0) ∏T t=0 π(at|st)p(st+1|st, at):
F = Ep(τ) [ ∇θ log π(at|st) (∇θ log π(at|st))T ] . (7)
However, the computation of the Fisher matrix is intractable in practice due to the large number of parameters involved; therefore, there is a need to resort to approximations, such as the Kroneckerfactored approximate curvature (K-FAC) method (Martens & Grosse, 2015), which has been first proposed for ACKTR in (Wu et al., 2017). In the proposed method, as it is detailed in Algorithm 1, this optimisation method is used for optimising the policy.
3 METHOD DESCRIPTION
While the original trust regions optimisation method can only use the samples from the very last policy, discarding the potentially useful information from the previous ones, we make use of samples over several consecutive policies. The rest of the section contains definition of the proposed replay buffer concept adaptation, and then formulation and discussion of the proposed algorithm.
3.1 USAGE OF REPLAY BUFFERS
Mnih et al. (2013) suggested to use replay buffers for DQN to improve stability of learning, which then has been extended to other off-policy methods such as DDPG (Lillicrap et al., 2015). The concept has not been applied to on-policy methods like TRPO (Schulman et al., 2015a) or ACKTR (Wu et al., 2017), which do not use of previous data generated by other policies. Although based on trust regions optimisation, ACER (Wang et al., 2016) uses replay buffers for its off-policy part.
In this paper, we propose a different concept of the replay buffers, which combines the on-policy data with data from several previous policies, to avoid the restrictions of policy distribution stationarity for stochastic policy gradient (Sutton et al., 2000). Such replay buffers are used for storing simulations from several policies at the same time, which are then utilised in the method, built upon generalised value and advantage functions, accommodating data from these policies. The following definitions are necessary for the formalisation of the proposed algorithm and theorems.
We define a generalised Q-function for multiple policies {π1, . . . , πn, . . . , πN} as
Qπ(st, at) = EnEsnt+1,ant+1,... [ ∞∑ l=0 γlr(snt+l, a n t+l) ] , t ≥ 0, (8)
sn0 ∼ ρ0(sn0 ), snt+1 ∼ P (snt+1|snt , ant ), ant ∼ πn(ant |snt ). (9) We also define the generalised value function and the generalised advantage function as
V π(st) = EnEat,st+1,at+1 [ ∞∑ l=0 γlr(snt+l, a n t+l) ] , t ≥ 0, (10)
Aπn(st) = Q πn(st, at)− V π(st), t ≥ 0, (11)
To conform with the notation from Sutton et al. (2000), we define
ρ(π) = V π(s0), D πn(s) = ∞∑ k=0 γkP (s0 → s, k, πn), (12)
P (s→ x, k, π), as in Sutton et al. (2000), is the probability of transition from the state s to the state x in k steps using policy π. Theorem 1. For the set of policies {π1, . . . , πN} the following equality will be true for the gradient:
∂ρπ
∂θ = N∑ n=1 p(πn) ∫ s dsDπn(s) ∫ a da ∂πn(s, a) ∂θ [Qπn(s, a) + bπn(s)], (13)
where θ are the joint parameters of all policies {πn} and bπn(s) is a bias function for the policy.
The proof of Theorem 1 is given in Appendix B. Applying a particular case of the bias function b^{\pi_n}(s) = -V^{\pi}(s) and using the likelihood ratio transformation, one can get
\frac{\partial \rho^{\pi}}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \int_s ds\, D^{\pi_n}(s) \int_a da\, \pi_n(s, a) \frac{\partial \log \pi_n(s, a)}{\partial \theta} A^{\pi_n}(s, a). (14)
3.2 ALGORITHM DESCRIPTION
The proposed approach is summarised in Algorithm 1. The replay buffer Rp contains data collected from several subsequent policies. The size of this buffer is RBP_CAPACITY.
Algorithm 1 Trust Regions Algorithm with a Replay Buffer
{Initialisation} Randomly initialise the weights θ of the policy estimator π(·) and Ψ of the value function estimator Ṽ(·). Initialise the policy replay buffer Rp = {}, set i = 0, δ = DELTA.
while i < MAX_TIMESTEPS do
  {Stage 1} Collect n_i data paths using the current policy π(·): P = \{\langle (s^j_0, a^j_0, r^j_0), \ldots, (s^j_{k_j}, a^j_{k_j}, r^j_{k_j}) \rangle\}_{j=0}^{n_i}; increase i by the total number of timesteps in all new paths.
  {Stage 2} Put the recorded paths into the policy paths replay buffer: Rp ← P.
  {Stage 3} For every path in Rp, compute the targets for the value function regression using Equation (15); update the value function estimator parameters Ψ.
  {Stage 4} For every path in Rp, estimate the advantage function using Equation (23).
  {Stage 5} Update the parameters of the policy θ for N_ITER_PL_UPDATE iterations using the gradient from Equation (25) and the barrier function defined in Equation (26).
end while
During Stage 1, the data are collected for every path until the termination state is reached, with at least TIMESTEPS_PER_BATCH steps in total across all paths. The policy actions are assumed to be sampled from a Gaussian distribution, with the mean values predicted by the policy estimator along with the diagonal of the covariance matrix. The idea of outputting the covariance matrix was inspired by the EPG paper (Ciosek & Whiteson, 2017), although the approach here is different.
At Stage 2, the obtained data for every policy are saved in the policy replay buffer Rp.
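To make Stage 2 concrete, below is a minimal Python sketch of such a multi-policy buffer, assuming each path is stored as a list of (state, action, reward) tuples; the class and method names are ours, not taken from the implementation.

from collections import deque

class PolicyReplayBuffer:
    # Stores the paths collected under the RBP_CAPACITY most recent policies.
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # one entry per policy

    def add(self, paths):
        # Append the paths of the current policy; once capacity is
        # exceeded, the oldest policy's paths are evicted automatically.
        self.buffer.append(paths)

    def all_paths(self):
        # Flatten paths across all stored policies for Stages 3-5.
        return [path for paths in self.buffer for path in paths]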
At Stage 3, the value function regressor is trained using the Adam optimiser (Kingma & Ba, 2015) with step size VF_STEP_SIZE for N_ITER_VF_UPDATE iterations, using a sum-of-squares loss function. The value function target values are computed for every state s_t for every policy in the replay buffer from the actually sampled rewards, where t_max is the maximum step index within the path:
\hat{V}(s_t) = \sum_{l=0}^{t_{max} - t} \gamma^l r(s_{t+l}, a_{t+l}), (15)
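A minimal sketch of this target computation for a single path follows; it accumulates the discounted sum of the actually observed rewards backwards over the path (the function name is illustrative).

import numpy as np

def value_targets(rewards, gamma):
    # V_hat(s_t) = sum_{l=0}^{t_max - t} gamma^l * r_{t+l}  (Equation 15)
    targets = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        targets[t] = running
    return targets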
During Stage 4, we perform the advantage function estimation.
Schulman et al. (2015b) proposed the Generalised Advantage Estimator for the advantage function A^{\pi}(s_t, a_t) as follows:
\tilde{A}^{\pi}(s_t, a_t) = (1 - \lambda)\left( \hat{A}^{\pi,(1)}_t + \lambda \hat{A}^{\pi,(2)}_t + \lambda^2 \hat{A}^{\pi,(3)}_t + \ldots \right), (16)

where \hat{A}^{\pi,(1)}_t = -\tilde{V}^{\pi}(s_t) + r_t + \gamma \tilde{V}^{\pi}(s_{t+1}), \ldots (17)

\hat{A}^{\pi,(k)}_t = -\tilde{V}^{\pi}(s_t) + r_t + \ldots + \gamma^{k-1} r_{t+k-1} + \gamma^k \tilde{V}^{\pi}(s_{t+k}), \ldots (18)
Here k > 0 is a cut-off value, defined by the length of the sequence of occurred states and actions within the MDP, \lambda \in [0, 1] is an estimator parameter, and \tilde{V}^{\pi}(s_t) is the approximation of the value function V^{\pi}(s_t), with the approximation targets defined in Equation (15). As proved in Schulman et al. (2015b), after rearrangement this results in the generalised advantage function estimator
\tilde{A}^{\pi}(s_t, a_t) = \sum_{l=0}^{k-1} (\gamma\lambda)^l \left( r_{t+l} + \gamma \tilde{V}^{\pi}(s_{t+l+1}) - \tilde{V}^{\pi}(s_{t+l}) \right). (19)
For the proposed advantage function (see Equation 11), the estimator could be defined similarly to Schulman et al. (2015b) as
\tilde{A}^{\pi_n}(s_t, a_t) = (1 - \lambda)\left( \hat{A}^{\pi_n,(1)}_t + \lambda \hat{A}^{\pi_n,(2)}_t + \lambda^2 \hat{A}^{\pi_n,(3)}_t + \ldots \right), (20)

\hat{A}^{\pi_n,(1)}_t = -\tilde{V}^{\pi}(s_t) + r_t + \gamma \tilde{V}^{\pi_n}(s_{t+1}), \ldots (21)

\hat{A}^{\pi_n,(k)}_t = -\tilde{V}^{\pi}(s_t) + r_t + \gamma r_{t+1} + \ldots + \gamma^{k-1} r_{t+k-1} + \gamma^k \tilde{V}^{\pi_n}(s_{t+k}). (22)
However, this would require the estimation of multiple value functions, which would defeat the purpose of the replay buffer. To avoid this, we modify the estimator for the proposed advantage function as
\tilde{A}^{\pi_n}(s_t, a_t) = \sum_{l=0}^{k-1} (\gamma\lambda)^l \left( r_{t+l} + \gamma \tilde{V}^{\pi}(s_{t+l+1}) - \tilde{V}^{\pi}(s_{t+l}) \right). (23)
Theorem 2. The difference between the estimators (20) and (23) is

\Delta \tilde{A}^{\pi_n}(s_t, a_t) = \gamma (1 - \lambda) \sum_{l=1}^{k} \lambda^{l-1} \gamma^{l-1} \left( \tilde{V}^{\pi_n}(s_{t+l}) - \tilde{V}^{\pi}(s_{t+l}) \right). (24)
The proof of Theorem 2 is given in Appendix C. It shows that the difference between the two estimators depends on the difference between the conventional and the generalised value functions; given a continuous value function approximator, it reveals that the closer the policies are, within a few trust region radii, the smaller the bias will be.
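For illustration, a sketch of the estimator in Equation (23) for one path is given below; it uses a single generalised value function approximator for all policies in the buffer, which is exactly what makes the replay buffer practical. Variable and function names are ours.

import numpy as np

def generalised_advantages(rewards, values, gamma, lam):
    # Equation (23): A_t = sum_l (gamma*lam)^l * delta_{t+l},
    # with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t).
    # `values` has length len(rewards) + 1; the last entry is the value of
    # the state after the path ends (zero for a terminal state).
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    deltas = rewards + gamma * values[1:] - values[:-1]
    advantages = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages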
During Stage 5, the policy is optimised using the K-FAC optimiser (Martens & Grosse, 2015) with a constant step size PL_STEP_SIZE. As one can see from the description, and differently from ACKTR, we do not use any adaptation of the trust region radius or of the optimisation algorithm parameters. Also, the policy network outputs the diagonal of the (diagonal) policy covariance matrix. For the purpose of efficient optimisation, the elements of the covariance matrix are restricted to universal minimum and maximum values MIN_COV_EL and MAX_COV_EL.
As an extension of Schulman et al. (2015b) and following Theorem 1 with the substitution of the likelihood ratio, the policy gradient estimation is defined as
\nabla \rho(\theta) \approx \mathbb{E}_n \mathbb{E}_{\pi_n} \left[ \sum_{t=0}^{\infty} \tilde{A}^{\pi_n}(s^n_t, a^n_t) \nabla_\theta \log \pi_n(a^n_t | s^n_t) \right]. (25)
To practically implement this gradient, we substitute the parameters \theta_\pi of the latest policy in the replay buffer for the joint parameters \theta, assuming that the parameters do not deviate far from each other due to the trust region restrictions; it is still possible to calculate the estimate \tilde{A}^{\pi_n}(s^n_t, a^n_t) for each policy using Equation (23), as these policies have been observed. For the constrained optimisation, we add a linear barrier function to the function \rho(\theta):
\rho_b(\theta) = \rho(\theta) - \alpha \cdot \max\left(0, D_{KL}(\pi_{\theta_{old}}, \pi_\theta) - \delta\right), (26)
where \alpha > 0 is a barrier function parameter and \theta_{old} are the parameters of the policy from the previous iteration. Besides removing the need for heuristic estimation of the optimisation parameters, this also conforms with the theoretical propositions of Schulman et al. (2017) and, while our approach was proposed independently, pursues similar ideas of using an actual constrained optimisation method instead of changing the gradient step size parameters as in Schulman et al. (2015a).
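A sketch of this objective in PyTorch-style code is given below; `surrogate` stands for the policy-gradient objective \rho(\theta) from Equation (25) and `kl` for the KL divergence between the stored and current Gaussian policies, both assumed to be computed elsewhere.

import torch

def barrier_objective(surrogate, kl, alpha, delta):
    # Equation (26): rho_b(theta) = rho(theta) - alpha * max(0, KL - delta).
    penalty = alpha * torch.clamp(kl - delta, min=0.0)
    return surrogate - penalty

Maximising this quantity with K-FAC then enforces the trust region softly instead of adapting the step size.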
The network architectures correspond to the OpenAI Baselines ACKTR implementation (Dhariwal et al., 2017), written by the ACKTR authors (Wu et al., 2017). The only departure from that architecture is the diagonal covariance matrix output, which is present, in addition to the mean output, in the policy network.
4 EXPERIMENTS
4.1 EXPERIMENTAL RESULTS
In order to provide the experimental evidence for the method, we have compared it with the on-policy ACKTR (Wu et al., 2017), PPO (Schulman et al., 2017) and TRPO (Schulman et al., 2015a) methods, as well as with the off-policy DDPG (Lillicrap et al., 2015) method on the MuJoCo (Todorov et al., 2012) robotic simulations. The technical implementation is described in Appendix A.
Figure 1 shows the total reward values and their standard deviations, averaged over every one hundred simulation steps over three randomised runs. The results show drastic improvements over the state-of-the-art methods, including the on-policy ones (ACKTR, TRPO, PPO), on most problems.
In contrast to those methods, the proposed method shows that the adaptive values for the trust region radius can be advantageously replaced by a fixed value in combination with a trainable policy distribution covariance matrix, thus reducing the number of necessary hyperparameters. The results for ACKTR on the tasks HumanoidStandup, Striker and Thrower are not included, as the baseline ACKTR implementation (Dhariwal et al., 2017) diverged in the first iterations with the predefined parameterisation. PPO results are obtained from the Baselines implementation PPO1 (Dhariwal et al., 2017).
Figure 2 compares results for different replay buffer sizes; the size of a replay buffer reflects the number of policies in it, not the number of actions (i.e. buffer size 3 means data from three successive policies are in the replay buffer). We see that in most of the cases, the use of replay buffers shows a performance improvement over replay buffer size 1 (i.e., no replay buffer, with only the current policy used for the policy gradient); substantial improvements can be seen for the HumanoidStandup task.
Figure 3 shows the performance comparison with the DDPG method (Lillicrap et al., 2015). In all the tasks except HalfCheetah and Humanoid, the proposed method outperforms DDPG. For HalfCheetah, the versions with a replay buffer marginally overcome the one without. It is also remarkable that the method demonstrates stable performance on the tasks HumanoidStandup, Pusher, Striker and Thrower, on which DDPG failed (these tasks were not included in the DDPG article).
5 CONCLUSION
The paper combines replay buffers and on-policy data for reinforcement learning. Experimental results on various tasks from the MuJoCo suite (Todorov et al., 2012) show significant improvements compared to the state of the art. Moreover, we proposed replacing the heuristically calculated trust region parameters with a single fixed hyperparameter, which also reduces the computational expenses, together with a trainable diagonal covariance matrix.
The proposed approach opens the door to using a combination of replay buffers and trust regions for reinforcement learning problems. While it is formulated for continuous tasks, it is possible to reuse the same ideas for discrete reinforcement learning tasks, such as ATARI games.
A TECHNICAL IMPLEMENTATION
The parameters of Algorithm 1, used in the experiments, are given in Table 1; the parameters were initially set, where possible, to the ones taken from the state-of-the-art trust region approach implementation (Wu et al., 2017; Dhariwal et al., 2017), and then some of them were changed based on experimental evidence. As the underlying numerical optimisation algorithms are out of the scope of the paper, the parameters of the K-FAC optimiser from Dhariwal et al. (2017) have been used for the experiments; for the Adam algorithm (Kingma & Ba, 2015), the default parameters from the Tensorflow (Abadi et al., 2016) implementation (β₁ = 0.9, β₂ = 0.999, ε = 1 · 10⁻⁸) have been used.
The method has been implemented in Python 3 using Tensorflow (Abadi et al., 2016) as an extension of the OpenAI baselines package (Dhariwal et al., 2017). The neural network for the control experiments consists of two fully connected layers, containing 64 neurons each, following the OpenAI ACKTR network implementation (Dhariwal et al., 2017).
B PROOF OF THEOREM 1
Proof. Extending the derivation from Sutton et al. (2000), one can see that:
\frac{\partial V^{\pi}(s)}{\partial \theta} \overset{def}{=} \frac{\partial}{\partial \theta} \int_a da\, \pi(s, a) \left( Q^{\pi}(s, a) + b^{\pi}(s) \right) = \int_x dx \sum_{k=0}^{\infty} \gamma^k P(s \to x, k, \pi) \int_a da\, \frac{\partial \pi(x, a)}{\partial \theta} \left( Q^{\pi}(x, a) + b^{\pi}(x) \right) (27)
Then,
\frac{\partial \rho^{\pi}}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \frac{\partial V^{\pi_n}(s_0)}{\partial \theta} = \sum_{n=1}^{N} p(\pi_n) \int_s ds \sum_{k=0}^{\infty} \gamma^k P(s_0 \to s, k, \pi_n) \int_a da\, \frac{\partial \pi_n(s, a)}{\partial \theta} \left( Q^{\pi_n}(s, a) + b^{\pi_n}(s) \right) = \sum_{n=1}^{N} p(\pi_n) \int_s ds\, D^{\pi_n}(s) \int_a da\, \frac{\partial \pi_n(s, a)}{\partial \theta} \left( Q^{\pi_n}(s, a) + b^{\pi_n}(s) \right) (28)
C PROOF OF THEOREM 2
Proof. The difference between the two k-th estimators is given as
\Delta \hat{A}^{\pi_n,(k)}_t = \gamma^k \underbrace{\left( \tilde{V}^{\pi_n}(s_{t+k}) - \tilde{V}^{\pi}(s_{t+k}) \right)}_{\Delta V_k} (29)
By substituting this into the GAE estimator difference one can obtain
\Delta \tilde{A}^{\pi_n}(s_t, a_t) = (1 - \lambda)\left( \gamma \Delta V_1 + \lambda \gamma^2 \Delta V_2 + \lambda^2 \gamma^3 \Delta V_3 + \ldots + \lambda^{k-1} \gamma^k \Delta V_k \right) = \gamma (1 - \lambda) \sum_{l=1}^{k} \lambda^{l-1} \gamma^{l-1} \Delta V_l. (30)

1. What is the focus of the paper regarding integrating replay buffers and on-policy trust region policy optimization?
2. What are the strengths and weaknesses of the proposed method in addressing the problem?
3. Do you have any questions regarding the theoretical analysis, particularly in understanding the advantage function and barrier function?
4. How does the reviewer assess the effectiveness of the proposed algorithm compared to other methods such as PPO or Trust PCL?
5. What are the limitations of the paper, especially in addressing distribution mismatching issues and the use of K-FAC instead of CG?

Review
In this paper, the authors present how to integrate replay buffers and on-policy trust region policy optimization (TRPO) by generalizing the Q/V/advantage functions, and then empirically show that the proposed method outperforms TRPO/DDPG.
The generalization of advantage function is quite interesting and is well written. One minor issue is that d^{\pi_n} (s) is confusing since it appears after ds.
The theory in Section 3.1 makes sense. However, due to the limitation in Theorem 1 that $\theta$ is the joint parameters, applying Theorem 1 can be difficult. In Eq (25), what is the $\theta$ here? And what does $\nabla_\theta \pi_n$ mean? Does $\pi_n$ use $\theta$ for computation? One of the problems of using replay buffers in on-policy algorithms is that the stationary distribution of states changes as the policy changes, and at least the writing doesn't make it clear how the distribution mismatch issue is solved. Further explanation of Eq (25) might help. If the distributions of states are assumed to match, then the joint distribution of states and actions may still mismatch, so additional importance sampling might help, as suggested in [1] Eq (3).
Another problem is on the barrier function. In Eq (26), if we only evaluate $\rho_b(\theta)$ (or its gradient w.r.t. $\theta$) at the point $\theta_old$, it doesn't differ with or without the barrier function. So in order to show the barrier function helps, we must evaluate $\rho_b(\theta)$ (or its gradient) at a point $\theta \neq \theta_old$. As far as I know, the underlying optimizer, K-FAC, just evaluates the objective's (i.e., $\rho_b$) gradients at $\theta_old$. Both Conjugate Gradient (CG), which TRPO uses, and K-FAC are trying to solve $F^{-1} g$ where $g$ is the gradient of the objective at the current point.
The experiments show significant improvement over TRPO/DDPG. However, some additional experiments are expected.
1. How is the proposed algorithm compared to PPO or Trust PCL?
2. How does the barrier function help? More importantly, what's the comparison of the barrier function to [1] Eq (5)?
The proposed algorithm seems more like a variant of ACKTR instead of TRPO since line search is missing in the proposed algorithm and the underlying optimizer is K-FAC instead of CG.
Ref:
[1]: Proximal Policy Optimization Algorithms, by John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov. |
ICLR | Title
Few-shot Learning via Dirichlet Tessellation Ensemble
Abstract
Few-shot learning (FSL) is the process of rapid generalization from abundant base samples to inadequate novel samples. Despite extensive research in recent years, FSL is still not able to generate satisfactory solutions for a wide range of real-world applications. To confront this challenge, we study the FSL problem from a geometric point of view in this paper. One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space. We retrofit it by making use of a recent advance in computational geometry called Cluster-induced Voronoi Diagram (CIVD). Starting from the simplest nearest neighbor model, CIVD gradually incorporates cluster-to-point and then cluster-to-cluster relationships for space subdivision, which is used to improve the accuracy and robustness at multiple stages of FSL. Specifically, we use CIVD (1) to integrate parametric and nonparametric few-shot classifiers; (2) to combine feature representation and surrogate representation; (3) and to leverage feature-level, transformation-level, and geometry-level heterogeneities for a better ensemble. Our CIVD-based workflow enables us to achieve new state-of-the-art results on mini-ImageNet, CUB, and tiered-ImageNet datasets, with ∼2%−5% improvements upon the next best. To summarize, CIVD provides a mathematically elegant and geometrically interpretable framework that compensates for extreme data insufficiency, prevents overfitting, and allows for fast geometric ensemble of thousands of individual VDs. These together make FSL stronger.
1 INTRODUCTION
Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason for which is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interests (Wang et al., 2020).
Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real world scenarios a query sample (e.g. patient) also comes individually and is unique, for instance, in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, we in this paper adhere to the inductive setting and make on-the-fly prediction for each newly seen sample.
Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. Despite the extensive research on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?

All four authors are corresponding authors.
For the first question, data augmentation has been a successful approach to expand the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) (Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014) (Zhang et al., 2019; Chen et al., 2019b). However, in either way, the authenticity of the augmented data or features is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigate support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for a better discrimination on novel classes, which introduces extra training burden. As a matter of fact, we still lack a method that makes full use of the base classes and the pretrained model effectively.
In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, the nearest neighbor-like approaches, e.g. ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. Geometrically, what a nearest neighbor-based method does, under the hood, is to partition the feature space into a Voronoi Diagram (VD) that is induced by the feature centroids of the novel classes. Although it is highly efficient and simple, Voronoi Diagrams coarsely draw the decision boundary by linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure arises in FSL.
To resolve this issue, we adopt a novel technique called Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), which is a recent breakthrough in computational geometry. CIVD generalizes VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating
region (or Voronoi cell) not only for a point (e.g. a class prototype) but also for a cluster of points, guaranteed to have a (1 + ε)-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. CIVD provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than VD, without losing the resistance to overfitting.
Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows.
1. We first categorize different types of few-shot classifiers as different variants of Voronoi Diagram: nearest neighbor model as Voronoi Diagram, linear classifier as Power Diagram, and cosine classifier as spherical Voronoi Diagram (Table 1). We then unify them via CIVD that enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--).
2. Going from cluster-to-point to cluster-to-cluster influence, we further propose Cluster-to-cluster Voronoi Diagram (CCVD), as a natural extension of CIVD. Based on CCVD, we present DeepVoro which enables fast geometric ensemble of a large pool of thousands of configurations for FSL.
3. Instead of using base classes for distribution calibration and data augmentation (Yang et al., 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++ that integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL.
Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all
three benchmark datasets including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures.
2 RELATED WORK
Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metric-based methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover's Distance (Zhang et al., 2020a;b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based methods (Zhang et al., 2021b; Mangla et al., 2020) incorporate supervision from the data itself to learn a robuster feature extractor. (4) Ensembling is another powerful technique that boosts the performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) trains several networks simultaneously and encourages robustness and cooperation among them. However, due to the high computational load of training deep models, this ensemble is restricted by the number of networks, which is typically <20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks is first hinted at by Raghu et al. (2017) who reveals that piecewise linear activations subdivide input space into convex polytopes. Then, Balestriero et al. (2019) points out that the exact structure is a Power Diagram (Aurenhammer, 1987) which is subsequently applied upon recurrent neural network (Wang et al., 2018) and generative model (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing feature space. Recently, Chen et al. (2013; 2017); Huang et al. (2021) uses an influence function F (C, z) to measure the joint influence of all objects in C on a query z to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL.
3 METHODOLOGY
3.1 PRELIMINARIES
Few-shot learning aims at discriminating between novel classes C_{novel} with the aid of a larger amount of samples from base classes C_{base}, C_{novel} ∩ C_{base} = ∅. The whole learning process usually follows the meta-learning scheme. Formally, given a dataset of base classes D = \{(x_i, y_i)\}, x_i \in \mathcal{D}, y_i \in C_{base}, with \mathcal{D} being an arbitrary domain, e.g. natural images, a deep neural network z = \phi(x), z \in \mathbb{R}^n, which maps from the image domain \mathcal{D} to the feature domain \mathbb{R}^n, is trained using a standard gradient descent algorithm, after which \phi is fixed as the feature extractor. This process is referred to as the meta-training stage that squeezes out the commonsense knowledge from D. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of K-way N-shot tasks (episodes) \{T\}. Each such episode is further decomposed into a support set S = \{(x_i, y_i)\}_{i=1}^{K \times N}, y_i \in C_T, and a query set Q = \{(x_i, y_i)\}_{i=1}^{K \times Q}, y_i \in C_T, in which the episode classes C_T are a randomly sampled subset of C_{novel} with cardinality K, and each class contains only N and Q random samples in the support set and query set, respectively. For few-shot classification, we introduce here two widely used schemes as follows. For simplicity, all samples here are from S and Q, without data augmentation applied.
Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019) etc., a prototype c_k is acquired by averaging over all supporting features for a class k \in C_T:
c_k = \frac{1}{N} \sum_{x \in S, y = k} \phi(x) (1)
Then each query sample x \in Q is classified by finding the nearest prototype: \hat{y} = \arg\min_k d(z, c_k), where d(z, c_k) = ||z - c_k||_2^2, i.e. we use the squared Euclidean distance as the distance metric d.
Linear Classifier (Parametric). Another scheme uses a linear classifier with cross-entropy loss optimized on the supporting samples:
L(W, b) = \sum_{(x,y) \in S} -\log p(y|\phi(x); W, b) = \sum_{(x,y) \in S} -\log \frac{\exp(W_y^T \phi(x) + b_y)}{\sum_k \exp(W_k^T \phi(x) + b_k)} (2)
in which W_k, b_k are the linear weight and bias for class k, and the predicted class for a query x \in Q is \hat{y} = \arg\max_k p(y|z; W_k, b_k).
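As a reference for later sections, a minimal NumPy sketch of the nonparametric head is given below: prototypes per Equation (1), followed by nearest-prototype assignment, which induces the Voronoi cells discussed next. Function names are ours.

import numpy as np

def prototypes(support_feats, support_labels, num_classes):
    # Equation (1): class prototype = mean of its support features.
    return np.stack([support_feats[support_labels == k].mean(axis=0)
                     for k in range(num_classes)])

def nearest_prototype(query_feats, protos):
    # Assign each query to the nearest prototype under squared
    # Euclidean distance, i.e. to the Voronoi cell it falls into.
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)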
3.2 FEW-SHOT LEARNING AS CLUSTER-INDUCED VORONOI DIAGRAMS
In this section, we first introduce the basic concepts of Voronoi Tessellations, and then show how parametric/nonparametric classifier heads can be unified by VD.
Definition 3.1 (Power Diagram and Voronoi Diagram). Let \Omega = \{\omega_1, ..., \omega_K\} be a partition of the space \mathbb{R}^n, and C = \{c_1, ..., c_K\} be a set of centers such that \cup_{r=1}^{K} \omega_r = \mathbb{R}^n, \cap_{r=1}^{K} \omega_r = \emptyset. Additionally, each center is associated with a weight \nu_r \in \{\nu_1, ..., \nu_K\} \subseteq \mathbb{R}^+. Then the set of pairs \{(\omega_1, c_1, \nu_1), ..., (\omega_K, c_K, \nu_K)\} is a Power Diagram (PD), where each cell is obtained via \omega_r = \{z \in \mathbb{R}^n : r(z) = r\}, r \in \{1, ..., K\}, with

r(z) = \arg\min_{k \in \{1,...,K\}} d(z, c_k)^2 - \nu_k. (3)

If the weights are equal for all k, i.e. \nu_k = \nu_{k'}, \forall k, k' \in \{1, ..., K\}, then the PD collapses to a Voronoi Diagram (VD).
By definition, it is easy to see that the nearest neighbor classifier naturally partitions the space into K cells with centers {c1, ..., cK}. Here we show that the linear classifier is also a VD under a mild condition.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W, b partitions the input space \mathbb{R}^n into a Voronoi Diagram with centers \{\tilde{c}_1, ..., \tilde{c}_K\} given by \tilde{c}_k = \frac{1}{2} W_k if b_k = -\frac{1}{4} ||W_k||_2^2, k = 1, ..., K.
Proof. See Appendix B for details.
3.2.1 FROM VORONOI DIAGRAM TO CLUSTER-INDUCED VORONOI DIAGRAM
Now that both nearest neighbor and linear classifier have been unified by VD, a natural idea is to integrate them together. Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al.,
2021) is a generalization of VD which allows multiple centers in a cell, and has been successfully used for clinical diagnosis from biomedical images (Wang et al., 2015), providing an ideal tool for the integration of parametric/nonparametric classifiers for FSL. Formally:
Definition 3.2 (Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al., 2021)). Let \Omega = \{\omega_1, ..., \omega_K\} be a partition of the space \mathbb{R}^n, and C = \{C_1, ..., C_K\} be a set (possibly a multiset) of clusters. The set of pairs \{(\omega_1, C_1), ..., (\omega_K, C_K)\} is a Cluster-induced Voronoi Diagram (CIVD) with respect to the influence function F(C_k, z), where each cell is obtained via \omega_r = \{z \in \mathbb{R}^n : r(z) = r\}, r \in \{1, ..., K\}, with
r(z) = \arg\max_{k \in \{1,...,K\}} F(C_k, z). (4)
Here C can be either a given set of clusters or even the whole power set of a given point set, and the influence function is defined as a function over the collection of distances from each member in a cluster C_k to a query point z:
Definition 3.3 (Influence Function). The influence from C_k, k \in \{1, ..., K\}, to z \notin C_k is F(C_k, z) = F(\{d(c_k^{(i)}, z) \,|\, c_k^{(i)} \in C_k\}_{i=1}^{|C_k|}). In this paper F is assumed to have the following form
F(C_k, z) = -\mathrm{sign}(\alpha) \sum_{i=1}^{|C_k|} d(c_k^{(i)}, z)^{\alpha}. (5)
The sign function here makes sure that F is a monotonically decreasing function with respect to the distance d. The hyperparameter \alpha controls the magnitude of the influence; for example, in gravity force \alpha = -(n-1) in n-dimensional space and in electric force \alpha = -2. Since the nearest neighbor centers \{c_k\}_{k=1}^{K} and the centers introduced by the linear classifier \{\tilde{c}_k\}_{k=1}^{K} are obtained from different schemes and could both be informative, we merge the corresponding centers for a novel class k into a new cluster C_k = \{c_k, \tilde{c}_k\}, and use the resulting C = \{C_1, ..., C_K\} to establish a CIVD. In such a way, the final partition may enjoy the advantages of both parametric and nonparametric classifier heads. We name this approach DeepVoro--.
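A minimal sketch of this DeepVoro-- prediction rule is shown below, with each class represented by the two-point cluster C_k = {c_k, c̃_k} and the influence of Equation (5); here d is taken to be the Euclidean distance, and all names are illustrative.

import numpy as np

def civd_predict(query_feats, vd_centers, lr_centers, alpha=1.0):
    # Stack the VD and (reduced) LR centers into per-class clusters: (K, 2, n).
    clusters = np.stack([vd_centers, lr_centers], axis=1)
    # Distances from each query to each cluster member: (Q, K, 2).
    d = np.linalg.norm(query_feats[:, None, None, :] - clusters[None], axis=-1)
    # Influence F(C_k, z) = -sign(alpha) * sum_i d(c_k^(i), z)^alpha  (Eq. 5).
    influence = -np.sign(alpha) * (d ** alpha).sum(axis=-1)
    return influence.argmax(axis=1)  # cell (class) with the largest influence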
3.3 FEW-SHOT CLASSIFICATION VIA SURROGATE REPRESENTATION
In the nearest neighbor classifier head, the distance from a query feature z to each of the prototypes \{c_k\}_{k=1}^{K} is the key discrimination criterion for classification. We rewrite \{d(z, c_k)\}_{k=1}^{K} as a vector d \in \mathbb{R}^K such that d_k = d(z, c_k). These distances are acquired by measuring the distance between two points in high dimension: z, c_k \in \mathbb{R}^n. However, the notorious behavior of high dimension is that the ratio between the nearest and farthest points in a point set P approaches 1 (Aggarwal et al., 2001), making \{d(z, c_k)\}_{k=1}^{K} less discriminative for classification, especially for the FSL problem with sample size N · K ≪ n. Hence, in this paper, we seek a surrogate representation. In the human perception and learning system, similarity among familiar and unfamiliar objects plays a key role in object categorization and classification (Murray et al., 2002), and it has been experimentally verified by functional magnetic resonance imaging (fMRI) that a large region in occipitotemporal cortex processes the shape of both meaningful and unfamiliar objects (Op de Beeck et al., 2008). In our method, a connection will be built between each unfamiliar novel class in C_{novel} and each related well-perceived familiar class in C_{base}. So the first step is to identify the most relevant base classes for a specific task T. Concretely:
Definition 3.4 (Surrogate Classes). In episode T, given the set of prototypes \{c_k\}_{k=1}^{K} for the support set S and the set of prototypes \{c'_t\}_{t=1}^{|C_{base}|} for the base set D, the surrogate classes for episode classes C_T are given as:
C_{surrogate}(T) = \bigcup_{k=1}^{K} \operatorname{Top-R}_{t \in \{1, \ldots, |C_{base}|\}} d(c_k, c'_t) (6)

in which the Top-R function returns the R base class indices with the smallest distances to c_k, and the center for a base class t is given as c'_t = \frac{1}{|\{(x,y) | x \in D, y = t\}|} \sum_{x \in D, y = t} \phi(x). Here R is a hyperparameter.
The rationale behind this selection, instead of simply using all base classes C_{base}, is that the episode classes C_T only overlap with a portion of the base classes (Zhang et al., 2021a), and discriminative similarities are likely to be overwhelmed by the background signal, especially when the number of base classes is large. After the surrogate classes are found, we re-index their feature centers as \{c'_j\}_{j=1}^{\tilde{R}}, \tilde{R} \leq R \cdot K. Then, both the support centers \{c_k\}_{k=1}^{K} and the query feature z are represented by the collection of similarities to these surrogate centers:
d'_k = (d(c_k, c'_1), \ldots, d(c_k, c'_{\tilde{R}})), \quad k = 1, \ldots, K,
d' = (d(z, c'_1), \ldots, d(z, c'_{\tilde{R}})), (7)

where d'_k, d' \in \mathbb{R}^{\tilde{R}} are the surrogate representations for novel class k and query feature z, respectively. With the surrogate representation, the prediction is found through \hat{y} = \arg\min_k d(d', d'_k) = \arg\min_k ||d' - d'_k||_2^2. This set of discriminative distances is rewritten as d'' \in \mathbb{R}^K such that d''_k = d(d', d'_k). An illustration of the surrogate representation is shown in Figure 1 on MultiDigitMNIST, a demonstrative dataset.
Integrating Feature Representation and Surrogate Representation. Until now, we have two discriminative systems, i.e., feature-based d ∈ RK and surrogate-based d′′ ∈ RK . A natural idea is to combine them to form the following final criterion:
\tilde{d} = \beta \frac{d}{||d||_1} + \gamma \frac{d''}{||d''||_1}, (8)

where d and d'' are normalized by their Manhattan norms, ||d||_1 = \sum_{k=1}^{K} d_k and ||d''||_1 = \sum_{k=1}^{K} d''_k, respectively, and \beta and \gamma are two hyperparameters adjusting the weights for the feature representation and the surrogate representation.
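The full surrogate pipeline of Equations (6)-(8) can be sketched as follows for a single query; the names and the use of plain Euclidean distance are illustrative simplifications.

import numpy as np

def surrogate_predict(query, protos, base_protos, R=10, beta=1.0, gamma=1.0):
    # Eq. (6): union of the R nearest base classes for every novel prototype.
    d_pb = np.linalg.norm(protos[:, None] - base_protos[None], axis=-1)  # (K, B)
    surrogate_idx = np.unique(np.argsort(d_pb, axis=1)[:, :R])
    # Eq. (7): represent prototypes and the query by distances to surrogates.
    d_k = d_pb[:, surrogate_idx]                                   # (K, R~)
    d_q = np.linalg.norm(query[None] - base_protos[surrogate_idx], axis=-1)
    # Feature-based and surrogate-based criteria, L1-normalised (Eq. 8).
    d_feat = np.linalg.norm(query[None] - protos, axis=-1)         # (K,)
    d_surr = np.linalg.norm(d_q[None] - d_k, axis=-1)              # (K,)
    d_tilde = beta * d_feat / d_feat.sum() + gamma * d_surr / d_surr.sum()
    return d_tilde.argmin()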
3.4 DEEPVORO: INTEGRATING MULTI-LEVEL HETEROGENEITY OF FSL
In this section we present DeepVoro, a fast geometric ensemble framework that unites our contributions to multiple stages of FSL, and show how it can be promoted to DeepVoro++ by incorporating surrogate representation.
Compositional Feature Transformation. It is believed that FSL algorithms favor features with more Gaussian-like distributions, and thus various kinds of transformations are used to improve the normality of the feature distribution, including power transformation (Hu et al., 2021), Tukey's Ladder of Powers Transformation (Yang et al., 2021), and L2 normalization (Wang et al., 2019). While these transformations are normally used independently, here we propose to combine several transformations sequentially in order to enlarge the expressivity of the transformation function and to increase the polymorphism of the FSL process. Specifically, for a feature vector z, three kinds of transformations are considered. (I) L2 Normalization. By projection onto the unit sphere in \mathbb{R}^n, the feature is normalized as f(z) = \frac{z}{||z||_2}. (II) Linear Transformation. Since all the features are now located on the unit sphere, we can then do scaling and shifting via a linear transformation: g_{w,b}(z) = wz + b. (III) Tukey's Ladder of Powers Transformation. Finally, Tukey's Ladder of Powers Transformation is applied to the feature: h_\lambda(z) = z^\lambda if \lambda \neq 0, and h_\lambda(z) = \log(z) if \lambda = 0. By the composition of L2 normalization, linear transformation, and Tukey's Ladder of Powers Transformation, the transformation function becomes (h_\lambda \circ g_{w,b} \circ f)(z), parameterized by w, b, \lambda.
Multi-level Heterogeneities in FSL. Now we are ready to articulate the hierarchical heterogeneity existing in different stages of FSL. (I) Feature-level Heterogeneity: Data augmentation has been exhaustively explored for expanding the data size of FSL (Ni et al., 2021), including but not limited to rotation, flipping, cropping, erasing, solarization, color jitter, MixUp (Zhang et al., 2017), etc. The modification of an image x will change the position of the feature z in the feature space. We denote all possible translations of an image as a set of functions \{T\}. (II) Transformation-level Heterogeneity: After obtaining the feature z, a parameterized transformation is applied to it, and the resulting features can be quite different for different parameters (see Figure F.1). We denote the set of all possible transformations as \{P_{w,b,\lambda}\}. (III) Geometry-level Heterogeneity: Even with the feature given, the few-shot classification model can still be diverse: whether a VD or PD-based model is used, whether the feature or the surrogate representation is adopted, and the setting of R will also change the degree of locality. We denote all possible models as \{M\}.
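A minimal sketch of the compositional transformation (h_λ ∘ g_{w,b} ∘ f) reads as follows, assuming nonnegative input features (e.g. post-ReLU) and a nonnegative shift b so that the power/log transform is well defined.

import numpy as np

def transform(z, w=1.0, b=0.0, lam=0.5, eps=1e-8):
    z = z / (np.linalg.norm(z) + eps)   # f: project onto the unit sphere
    z = w * z + b                       # g: linear scaling and shifting
    # h: Tukey's Ladder of Powers (log when lam == 0)
    return np.power(z, lam) if lam != 0 else np.log(z)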
DeepVoro for Fast Geometric Ensemble of VDs. With the above three-layer heterogeneity, the FSL process can be encapsulated as (M \circ P_{w,b,\lambda} \circ \phi \circ T)(x), and all possible configurations of a given episode T with a fixed \phi form the Cartesian product of these three sets: \{T\} \times \{P_{w,b,\lambda}\} \times \{M\}. Indeed, when a hold-out validation dataset is available, it can be used to find the optimal combination, but by virtue of ensemble learning, multiple models can still contribute positively to FSL (Dvornik et al., 2019). Since the cardinality of the resulting configuration set could be very large, the FSL model M as well as the ensemble algorithm is required to be highly efficient. The VD is a nonparametric model and no training is needed during the meta-testing stage, making it suitable for fast geometric ensemble. While CIVD models the cluster-to-point relationship via an influence function, here we further extend it so that the cluster-to-cluster relationship can be considered. This motivates us to define the Cluster-to-cluster Voronoi Diagram (CCVD):
Definition 3.5 (Cluster-to-cluster Voronoi Diagram). Let \Omega = \{\omega_1, ..., \omega_K\} be a partition of the space \mathbb{R}^n, and C = \{C_1, ..., C_K\} be a set of totally ordered sets with the same cardinality L (i.e. |C_1| = |C_2| = ... = |C_K| = L). The set of pairs \{(\omega_1, C_1), ..., (\omega_K, C_K)\} is a Cluster-to-cluster Voronoi Diagram (CCVD) with respect to the influence function F(C_k, C(z)), and each cell is obtained via \omega_r = \{z \in \mathbb{R}^n : r(z) = r\}, r \in \{1, ..., K\}, with
r(z) = \arg\max_{k \in \{1,...,K\}} F(C_k, C(z)) (9)
where C(z) is the cluster (also a totally ordered set with cardinality L) to which the query point z belongs, which is to say, all points in this cluster (the query cluster) will be assigned to the same cell. Similarly, the Influence Function is defined upon two totally ordered sets C_k = \{c_k^{(i)}\}_{i=1}^{L} and C(z) = \{z^{(i)}\}_{i=1}^{L}:

F(C_k, C(z)) = -\mathrm{sign}(\alpha) \sum_{i=1}^{L} d(c_k^{(i)}, z^{(i)})^{\alpha}. (10)
With this definition, we are now able to streamline our aforementioned novel approaches into a single ensemble model. Suppose there are in total L possible settings in our configuration pool \{T\} \times \{P_{w,b,\lambda}\} \times \{M\}; for all configurations \{\rho_i\}_{i=1}^{L}, we apply them onto the support set S to generate the K totally ordered clusters \{\{c_k^{(\rho_i)}\}_{i=1}^{L}\}_{k=1}^{K}, including each center c_k^{(\rho_i)} derived through configuration \rho_i, and onto a query sample x to generate the query cluster C(z) = \{z^{(\rho_1)}, ..., z^{(\rho_L)}\}, and then plug these two into Definition 3.5 to construct the final Voronoi Diagram.
When only the feature representation is considered in the configuration pool, i.e. \rho_i \in \{T\} \times \{P_{w,b,\lambda}\}, our FSL process is named DeepVoro; if the surrogate representation is also incorporated, i.e. \rho_i \in \{T\} \times \{P_{w,b,\lambda}\} \times \{M\}, DeepVoro is promoted to DeepVoro++, which allows for higher geometric diversity. See Appendix A for a summary of the notations and acronyms.
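A sketch of the resulting CCVD prediction rule over the configuration pool is given below; distances between the k-th class cluster and the query cluster are matched configuration-by-configuration, following Equation (10). Array shapes and names are ours.

import numpy as np

def ccvd_predict(query_clusters, class_clusters, alpha=1.0):
    # class_clusters: (K, L, n), ordered centers c_k^(rho_i) over L configs;
    # query_clusters: (Q, L, n), ordered query embeddings z^(rho_i).
    d = np.linalg.norm(query_clusters[:, None] - class_clusters[None], axis=-1)
    influence = -np.sign(alpha) * (d ** alpha).sum(axis=-1)  # (Q, K), Eq. (10)
    return influence.argmax(axis=1)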
4 EXPERIMENTS
The main goals of our experiments are to: (1) validate the strength of CIVD to integrate parametric and nonparametric classifiers and confirm the necessity of Voronoi reduction; (2) investigate how different levels of heterogeneity individually or collaboratively contribute to the overall result, and compare them with the state-of-the-art methods; (3) reanalyze this ensemble when the surrogate representation comes into play, and see how it could ameliorate the extreme shortage of support samples. See Table 2 for a summary and Appendix D for the detailed descriptions of mini-ImageNet (Vinyals et al., 2016), CUB (Welinder et al., 2010), and tiered-ImageNet (Ren et al., 2018), which are used in this paper.
DeepVoro--: Integrating Parametric and Nonparametric Methods via CIVD. To verify our proposed CIVD model for the integration of parametric/nonparametric FSL classifiers, we first run three standalone models: Logistic Regressions with Power/Voronoi Diagrams as the underlying geometric structure (Power-LR/Voronoi-LR), and the vanilla Voronoi Diagram (VD, i.e. the nearest neighbor model), and then integrate VD with either Power- or Voronoi-LR (see Appendix E for details). Interestingly, VD with Power-LR never reaches the best result, suggesting that ordinary LR cannot be integrated with VD due to their intrinsically distinct geometric structures. After the proposed Voronoi reduction (Theorem 3.1), however, VD+Voronoi-LR is able to improve upon both models in most cases, suggesting that CIVD can ideally integrate parametric and nonparametric models for better FSL.
DeepVoro: Improving FSL by Hierarchical Heterogeneities. In this section, we only consider two levels of heterogeneities for the ensemble: feature-level and transformation-level. For the feature-level ensemble, we utilize three kinds of image augmentations: rotation, flipping, and central cropping, summing up to 64 distinct ways of data augmentation (Appendix F). For the transformation-level ensemble, we use the proposed compositional transformations with 8 different combinations of λ and b that encourage diverse feature transformations (Appendix F.1) without loss of accuracy (Figure 2). The size of the resulting configuration pool becomes 512, and DeepVoro's performance is shown in Table 3. Clearly, DeepVoro outperforms all previous methods, especially on 5-way 5-shot FSL. Specifically, DeepVoro is better than the next best by 2.18% (than Ni et al. (2021)) on mini-ImageNet, by 1.47% (than Hu et al. (2021)) on CUB, and by 1.02% (than Yang et al. (2021)) on tiered-ImageNet. Note that this is an estimated improvement because not all competitive methods here are tested with the same random seed and number of episodes. More detailed results can be found in Appendix F. By virtue of CCVD, and using the simplest VD as the building block, DeepVoro is arguably able to yield a consistently better result through the ensemble of a massive pool of independent VDs. DeepVoro also exhibits high resistance to outliers, as shown in Figure K.16.
DeepVoro++: Further Improvement of FSL via Surrogate Representation. In the surrogate representation, the number of neighbors R for each novel class and the weight β balancing the surrogate/feature representations are two hyperparameters. With the help of an available validation set, a natural question is whether these hyperparameters can be found through optimization on the validation set, which requires a good generalization of the hyperparameters across different novel classes. From Figure K.13, the accuracy of VD with varying hyperparameters shows good agreement between the testing and validation sets. With this in mind, we select 10 combinations of β and R, guided by the validation set, in conjunction with 2 different feature transformations and 64 different image augmentations, adding up to a large pool of 1280 configurations for ensemble (denoted by DeepVoro++). As shown in Table 3, DeepVoro++ achieves the best results for 1-shot FSL: 2.53% higher than Zhang et al. (2020b), 2.38% higher than Hu et al. (2021), and 1.09% higher than Zhang et al. (2020b), on the three datasets, respectively, justifying the efficacy of our surrogate representation. See Appendix G for more detailed analysis.

[Figure panels: A. mini-ImageNet 5-way 5-shot, B. mini-ImageNet 5-way 1-shot, C. CUB 5-way 5-shot, D. CUB 5-way 1-shot.]
Ablation Experiments and Running Time. Table 4 varies the level of heterogeneity (see Tables F.4 and G.5 for all datasets). The average accuracy of VDs without CCVD integration (marked in Table 4) is significantly lower than that of the fully-fledged ensemble. Table 5 presents the running time of DeepVoro(++) benchmarked on a 20-core Intel® Core™ i7 CPU with NumPy (v1.20.3), whose efficiency is comparable to DC/S2M2 R, even with >1000× diversity.
Experiments with Different Backbones, Meta-training Protocols, and Domains. Because different feature extraction backbones, meta-training losses, and degree of discrepancy between the source/target domains will all affect the downstream FSL, we here examine the robustness of DeepVoro/DeepVoro++ under a number of different circumstances, and details are shown in Appendices H, I, J. Notably, DeepVoro/DeepVoro++ attains the best performance by up to 5.55%, and is therefore corroborated as a superior method for FSL, regardless of the backbone, training loss, or domain.
5 CONCLUSION
In this paper, our contribution is threefold. We first theoretically unify parametric and nonparametric few-shot classifiers into a general geometric framework (VD) and show an improved result by virtue of this integration (CIVD). By extending CIVD to CCVD, we present a fast geometric ensemble method (DeepVoro) that takes into consideration thousands of FSL configurations with high efficiency. To deal with the extreme data insufficiency in one-shot learning, we further propose a novel surrogate representation which, when incorporated into DeepVoro, promotes the performance of one-shot learning to a higher level (DeepVoro++). In future studies, we plan to extend our geometric approach to meta-learning-based FSL and lifelong FSL.
ACKNOWLEDGMENTS
This research was supported in part by NSF through grant IIS-1910492.
REPRODUCIBILITY STATEMENT
Our code as well as data split, random seeds, hyperparameters, scripts for reproducing the results in the paper are available at https://github.com/horsepurve/DeepVoro.
A NOTATIONS AND ACRONYMS
Parameters for feature-level, transformation-level, and geometry-level heterogeneity are shown in yellow, blue, and red, respectively. See Sec. F for implementation details. †Here PD is reduced to VD by Theorem 3.1. ‡For every λ (or R), the b (or β) value with the highest validation accuracy is introduced into the configuration pool.
Methods | Geometric Structures | Centers | Tunable Parameters (#): Description

DeepVoro-- | CIVD | C_k = \{c_k, \tilde{c}_k\}, with c_k from VD and \tilde{c}_k from PD† | (no tunable parameters)

DeepVoro | CCVD | C_k = \{c_k^{(\rho_i)}\}_{i=1}^{L}, \rho_i \in \{T\} \times \{P_{w,b,\lambda}\} |
  angle of rotation (4); flipping or not (2); scaling & cropping (8);
  w = 1: scale factor in linear transformation;
  b (4): shift factor in linear transformation;
  λ (2): exponent in powers transformation;
  #configurations L = 512

DeepVoro++ | CCVD | C_k = \{c_k^{(\rho_i)}\}_{i=1}^{L}, \rho_i \in \{T\} \times \{P_{w,b,\lambda}\} \times \{M\} |
  angle of rotation (4); flipping or not (2); scaling & cropping (8);
  w = 1: scale factor in linear transformation;
  b (1‡): shift factor in linear transformation;
  λ (2): exponent in powers transformation;
  R (10): the number of top-R nearest base prototypes for a novel prototype;
  γ = 1: weight for surrogate representation;
  β (1‡): weight for feature representation;
  #configurations L = 1280
B POWER DIAGRAM SUBDIVISION AND VORONOI REDUCTION
B.1 PROOF OF THEOREM 3.1
Lemma B.1. The vertical projection from the lower envelope of the hyperplanes \{\Pi_k(z) : W_k^T z + b_k\}_{k=1}^{K} onto the input space \mathbb{R}^n defines the cells of a PD.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W, b partitions the input space \mathbb{R}^n into a Voronoi Diagram with centers \{\tilde{c}_1, ..., \tilde{c}_K\} given by \tilde{c}_k = \frac{1}{2} W_k if b_k = -\frac{1}{4} ||W_k||_2^2, k = 1, ..., K.
Proof. We first articulate Lemma B.1 and find the exact relationship between the hyperplane \Pi_k(z) and the center of its associated cell in \mathbb{R}^n. By Definition 3.1, the cell for a point z \in \mathbb{R}^n is found by comparing d(z, c_k)^2 - \nu_k for different k, so we define the power function p(z, S) expressing this value:

p(z, S) = (z - u)^2 - r^2, (11)
in which S \subseteq \mathbb{R}^n is a sphere with center u and radius r. In fact, the weight \nu associated with a center in Definition 3.1 can be interpreted as the square of the radius, r^2. Next, let U denote the paraboloid y = z^2, and let \Pi(S) be the transform that maps a sphere S with center u and radius r into the hyperplane

\Pi(S) : y = 2z \cdot u - u \cdot u + r^2. (12)
It can be proved that Π is a bijective mapping between arbitrary spheres in Rn and nonvertical hyperplanes in Rn+1 that intersect U (Aurenhammer, 1987). Further, let z′ denote the vertical projection of z onto U and z′′ denote its vertical projection onto Π(S), then the power function can be written as
p(z, S) = d(z, z′)− d(z, z′′), (13)
which implies the following relationship between a sphere in \mathbb{R}^n and an associated hyperplane in \mathbb{R}^{n+1} (Lemma 4 in Aurenhammer (1987)): let S_1 and S_2 be non-concentric spheres in \mathbb{R}^n; then the bisector of their Power cells is the vertical projection of \Pi(S_1) \cap \Pi(S_2) onto \mathbb{R}^n. Now, we have a direct relationship between the sphere S and the hyperplane \Pi(S), and comparing equation (12) with the hyperplanes used in logistic regression \{\Pi_k(z) : W_k^T z + b_k\}_{k=1}^{K} gives us
u = \frac{1}{2} W_k, \qquad r^2 = b_k + \frac{1}{4} ||W_k||_2^2. (14)
Although there is no guarantee that b_k + \frac{1}{4} ||W_k||_2^2 is always positive for an arbitrary logistic regression model, we can impose a constraint on r^2 to keep it at zero during the optimization, which implies
b_k = -\frac{1}{4} ||W_k||_2^2. (15)
In this way, the radii for all K spheres become identical (all zero). After the optimization of the logistic regression model, the centers \{\frac{1}{2} W_k\}_{k=1}^{K} are used for the CIVD integration.
C DETAILS ABOUT THE DEMONSTRATIVE EXAMPLE ON MULTIDIGITMNIST DATASET
MultiDigitMNIST (Sun, 2019) is a dataset created by concatenating two (or three) digits of different classes from MNIST for few-shot image classification. Here we use the DoubleMNIST dataset (i.e. two digits in an image), consisting of 100 classes (00 to 99) with 1000 images of size 64 × 64 × 1 per class; the classes are further split into 64, 20, and 16 classes for training, testing, and validation, respectively. To better embed into the \mathbb{R}^2 space, we pick a ten-class subset (00, 01, 12, 13, 04, 05, 06, 77, 08, and 09) as the base classes for meta-training, and another five-class subset (02, 49, 83, 17, and 36) for one episode. The feature extractor is a 4-layer convolutional network with an additional fully-connected layer for 2D embedding. In the left panel of Figure 1, the VD is obtained by setting the centroid of each base class as the Voronoi center. For each novel class, the Voronoi center is simply the 1-shot support sample (Figure 1, central panel). The surrogate representation is computed as the collection of distances from a support/query sample to each of the base classes, as shown in the right panel of Figure 1. Interestingly, the surrogate representations for a novel class, no matter whether from a support sample (dotted line) or a query sample (colored line), generally follow a certain pattern: similar within a class, distinct across classes, which makes them ideal surrogates for distinguishing between different novel classes. In our paper, we design a series of algorithms answering multiple questions regarding this surrogate representation: how to select base classes for the calculation of the surrogate representation, how to combine it with the feature representation, and how to integrate it into the overall ensemble workflow.
D MAIN DATASETS
For a fair and thorough comparison with previous works, three widely-adopted benchmark datasets are used throughout this paper.
(1) mini-ImageNet (Vinyals et al., 2016) is a shrunk subset of ILSVRC-12 (Russakovsky et al., 2015), consisting of 100 classes, of which 64 classes are for training, 20 for testing, and 16 for validation. Each class has 600 images of size 84 × 84 × 3. (2) CUB (Welinder et al., 2010) is another benchmark dataset for FSL, especially fine-grained FSL, including 200 species (classes) of birds. CUB is an unbalanced dataset with 58 images per class on average, also of size 84 × 84 × 3. We split all classes into 100 base classes, 50 novel classes, and 50 validation classes, following previous works (Chen et al., 2019a).
(3) tiered-ImageNet (Ren et al., 2018) is another subset of ILSVRC-12 (Russakovsky et al., 2015) but has more images, 779,165 in total. All images are categorized into 351 base classes, 97 validation classes, and 160 novel classes. The number of images in each class is not always the same, 1,281 on average. The image size is also 84 × 84 × 3.
E DEEPVORO--: INTEGRATING PARAMETRIC AND NONPARAMETRIC METHODS VIA CIVD
Table E.3: Cluster-induced Voronoi Diagram (CIVD) for the integration of parametric Logistic Regression (LR) and nonparametric nearest neighbor (i.e. Voronoi Diagram, VD) methods. The results from S2M2 R and DC are also included in this table but excluded for comparison. Best result is marked in bold.
Methods | mini-ImageNet 5-way 1-shot | mini-ImageNet 5-way 5-shot | CUB 5-way 1-shot | CUB 5-way 5-shot | tiered-ImageNet 5-way 1-shot | tiered-ImageNet 5-way 5-shot
S2M2 R | 64.65 ± 0.45 | 83.20 ± 0.30 | 80.14 ± 0.45 | 90.99 ± 0.23 | 68.12 ± 0.52 | 86.71 ± 0.34
DC | 67.79 ± 0.45 | 83.69 ± 0.31 | 79.93 ± 0.46 | 90.77 ± 0.24 | 74.24 ± 0.50 | 88.38 ± 0.31
Power-LR | 65.45 ± 0.44 | 84.47 ± 0.29 | 79.66 ± 0.44 | 91.62 ± 0.22 | 73.57 ± 0.48 | 89.07 ± 0.29
Voronoi-LR | 65.58 ± 0.44 | 84.51 ± 0.29 | 79.63 ± 0.44 | 91.61 ± 0.22 | 73.65 ± 0.48 | 89.15 ± 0.29
VD | 65.37 ± 0.44 | 84.37 ± 0.29 | 78.57 ± 0.44 | 91.31 ± 0.23 | 72.83 ± 0.49 | 88.58 ± 0.29
CIVD-based DeepVoro--:
VD + Power-LR | 65.63 ± 0.44 | 84.25 ± 0.30 | 79.52 ± 0.43 | 91.52 ± 0.22 | 73.68 ± 0.48 | 88.71 ± 0.29
VD + Voronoi-LR | 65.85 ± 0.43 | 84.66 ± 0.29 | 79.40 ± 0.44 | 91.57 ± 0.22 | 73.78 ± 0.48 | 89.02 ± 0.29
E.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we first establish three few-shot classification models with different underlying geometric structures, two logistic regression (LR) models and one nearest neighbor model: (1) Power Diagram-based LR (Power-LR), (2) Voronoi Diagram-based LR (Voronoi-LR), and (3) Voronoi Diagram (VD). Then, the main purposes of our analysis are (1) to examine how the performance is affected by the proposed Voronoi Reduction method in Sec. 3.2, and (2) to inspect whether VD can be integrated with Power/Voronoi Diagram-based LRs.
The feature transformation used throughout this section is P_{w,b,\lambda} with w = 1.0, b = 0.0, λ = 0.5. For Power-LR, we train it directly on the transformed K-way N-shot support samples using the PyTorch library with an Adam optimizer (batch size 64, learning rate 0.01). For Voronoi-LR, the vanilla LR is retrofitted as shown in Algorithm 1, in which the bias is given by Theorem 3.1 to make sure that the parameters induce a VD at each iteration.
In our CIVD model in Definition 3.2, we use a cluster instead of a single prototype to stand for a novel class. Here this cluster contains two points, i.e. C_k = \{c_k, \tilde{c}_k\}, in which c_k is obtained from the VD, and \tilde{c}_k is acquired from Power-LR or Voronoi-LR. The question we intend to answer here is whether Power-LR or Voronoi-LR is the suitable model for this integration.
Algorithm 1: Voronoi Diagram-based Logistic Regression.
Data: Support Set S
Result: W
1 Initialize W ← W^(0);
2 for epoch ← 1, ..., #epochs do
3   b_k ← -\frac{1}{4} ||W_k||_2^2, ∀k = 1, ..., K ;  // Apply Theorem 3.1
4   Compute loss L(W, b) ;                             // forward propagation
5   Update W ;                                          // backward propagation
6 end
7 return W
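A PyTorch sketch of Algorithm 1 follows; tying the bias to b_k = -\frac{1}{4}||W_k||_2^2 (Theorem 3.1) ensures every iterate induces a Voronoi Diagram with centers W_k/2. Hyperparameter values mirror the setup above; the class and function names are ours.

import torch

class VoronoiLR(torch.nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(num_classes, dim))

    def forward(self, z):
        b = -0.25 * (self.W ** 2).sum(dim=1)  # bias from Theorem 3.1
        return z @ self.W.t() + b             # logits W_k^T z + b_k

def fit_voronoi_lr(feats, labels, num_epochs=100):
    model = VoronoiLR(feats.size(1), int(labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(num_epochs):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(feats), labels)
        loss.backward()
        opt.step()
    return 0.5 * model.W.detach()  # Voronoi centers for the CIVD integration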
Figure F.1: The t-SNE visualizations of (A) original features, (B) L2 normalization, (C) Tukey’s Ladder of Powers Transformation with λ = 0.5, and (D) compositional transformation with λ = 0, w = 1, b = 0.04 of 5 novel classes from mini-ImageNet dataset.
E.2 RESULTS
The results are shown in Table E.3. Interestingly, when integrated with VD, Power-LR never reaches the best result, suggesting that VD and LR are intrinsically different geometric models and cannot simply be integrated together without additional effort. On the mini-ImageNet and tiered-ImageNet datasets, the best results are achieved by either Voronoi-LR or VD+Voronoi-LR, showing that CIVD coupled with the proposed Voronoi reduction can ideally integrate parametric and nonparametric few-shot models. Notably, on these two datasets, when Power-LR is reduced to Voronoi-LR, although the number of parameters is decreased (b is directly given by Theorem 3.1 and is not involved in the optimization), the performance is always better, for example, increasing from 65.45% to 65.58% on 5-way 1-shot mini-ImageNet data. On the CUB dataset, the results of different models are similar, probably because CUB is a fine-grained dataset and all classes are similar to each other (all birds).
F DEEPVORO: IMPROVING FSL VIA HIERARCHICAL HETEROGENEITIES
F.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section we describe feature-level and transformation-level heterogeneities that are used for ensemble in order to improve FSL. See the next section for geometry-level heterogeneity.
Feature-level heterogeneity. Considering the reproducibility of the methodology, we only employ deterministic data augmentation of the images, without randomness involved. Specifically, three kinds of data augmentation techniques are used. (1) Rotation is an important augmentation method widely used in self-supervised learning (Mangla et al., 2020). Rotating the original images by 0°, 90°, 180°, and 270° gives us four ways of augmentation. (2) After rotation, we can flip the images horizontally, giving rise to two additional choices after each rotation degree. (3) Central cropping after scaling can alter the resolution and focus area of the image. Scaling the original images to (84+B) × (84+B), with B increasing from 0 to 70 in steps of 10, brings us eight ways of augmentation.
Finally, different combinations of the three types result in 64 kinds of augmentation methods (i.e. |{T}| = 64); see the sketch after this paragraph.
Transformation-level heterogeneity. In our compositional transformation, the function (h_λ ∘ g_{w,b} ∘ f)(z) is parameterized by w, b, λ. Since g is appended after the L2 normalization f, the vector that comes into g is always a unit vector, so we simply set w = 1. For the different combinations of λ and b, we test different values with either λ = 0 or λ ≠ 0 on the hold-out validation set (as shown in Figures 2 and K.12), and pick the top-8 combinations with the best performance on the validation set.
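As a rough illustration of the feature-level pool described above, the sketch below enumerates the 64 deterministic augmentations with torchvision; the helper name `make_augmentation_pool` and the exact ordering are our own assumptions.

```python
from itertools import product
from torchvision.transforms import functional as TF

def make_augmentation_pool(base_size=84):
    """Enumerate 4 rotations x 2 flips x 8 scales = 64 deterministic augmentations."""
    pool = []
    for angle, flip, B in product([0, 90, 180, 270], [False, True], range(0, 80, 10)):
        def T(img, angle=angle, flip=flip, B=B):
            img = TF.rotate(img, angle)                           # (1) rotation
            if flip:
                img = TF.hflip(img)                               # (2) horizontal flip
            img = TF.resize(img, [base_size + B, base_size + B])  # (3) rescale ...
            return TF.center_crop(img, [base_size, base_size])    # ... and center-crop
        pool.append(T)
    return pool

augmentations = make_augmentation_pool()
assert len(augmentations) == 64  # |{T}| = 64
```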
Ensemble Schemes. Now, in our configuration pool {T} × {P_{w,b,λ}}, there are 512 possible configurations {ρ^(i)}_{i=1}^{512}. For each ρ, we apply it on both the testing and the validation sets. With this large pool of ensemble candidates, how and whether to select a subset {ρ^(i)}_{i=1}^{L′} ⊆ {ρ^(i)}_{i=1}^{512} is still a nontrivial problem. Here we explore three different schemes. (1) Full (vanilla) ensemble. All candidates in {ρ^(i)}_{i=1}^{512} are taken into consideration and then plugged into Definition 3.5 to build the CCVD for space partition. (2) Random ensemble. A randomly selected subset of size L′ < L is used for the ensemble. (3) Guided ensemble. We expect that the performance of {ρ^(i)}_{i=1}^{512} on the validation set can be used to guide the selection of {ρ^(i)}_{i=1}^{L′} on the testing set, provided that there is good correlation between the testing set and the validation set. Specifically, we rank the configurations by their performance on the validation set, and add them sequentially into {ρ^(i)}_{i=1}^{L′} until a maximum ensemble performance is reached on the validation set; we then use this configuration set for the final ensemble. Since VD is nonparametric and fast, we adopt VD as the building block and only use VD for each ρ for the remaining part of the paper. The α value in the influence function (Definition 3.3) is set to 1 throughout the paper for simplicity of computation.
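The guided ensemble scheme can be summarized by the following NumPy sketch; the function name and array layout are illustrative assumptions, and distances are summed directly since α = 1 makes maximizing the influence equivalent to minimizing the summed distance.

```python
import numpy as np

def guided_ensemble(val_acc, val_dists, test_dists, val_labels):
    """Greedy guided ensemble: rank configurations by validation accuracy and
    add them one by one until the ensembled validation accuracy peaks."""
    order = np.argsort(val_acc)[::-1]              # best configuration first
    running = np.zeros_like(val_dists[0])
    best_acc, best_m = -1.0, 1
    for m, i in enumerate(order, start=1):
        running = running + val_dists[i]           # alpha = 1: summed distances
        acc = (running.argmin(axis=1) == val_labels).mean()
        if acc > best_acc:
            best_acc, best_m = acc, m              # remember where accuracy peaks
    chosen = order[:best_m]                        # configurations kept for testing
    test_sum = sum(test_dists[i] for i in chosen)
    return test_sum.argmin(axis=1)                 # cell with maximal influence

# toy usage: 512 configurations, 100 validation / 75 test queries, K = 5 classes
rng = np.random.default_rng(0)
val_d = [rng.random((100, 5)) for _ in range(512)]
test_d = [rng.random((75, 5)) for _ in range(512)]
preds = guided_ensemble(rng.random(512), val_d, test_d, rng.integers(0, 5, 100))
```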
For a fair comparison, we downloaded the trained models¹ used by Mangla et al. (2020) and Yang et al. (2021). The performance of FSL algorithms is typically evaluated over a sequence of independent episodes, so the data split and the random seed for the selection of novel classes as well as the support/query sets in each episode will all lead to different results. To ensure the fairness of our evaluation, DC (Yang et al., 2021) and S2M2 R (Mangla et al., 2020) are reevaluated with the same data split and random seed as DeepVoro. The results are obtained by running 2000 episodes, and the average accuracy as well as 95% confidence intervals are reported.
F.2 RESULTS
Table F.4: Ablation study of DeepVoro's performance with different levels of ensemble. The number of ensemble members is given in parentheses.
Methods | Feature-level | Transformation-level | mini-ImageNet 1-shot | mini-ImageNet 5-shot | CUB 1-shot | CUB 5-shot | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot
No Ensemble | ✗ | ✗ | 65.37 ± 0.44 | 84.37 ± 0.29 | 78.57 ± 0.44 | 91.31 ± 0.23 | 72.83 ± 0.49 | 88.58 ± 0.29
Vanilla Ensemble (8) | ✗ | ✓ | 66.45 ± 0.44 | 84.55 ± 0.29 | 80.98 ± 0.44 | 91.47 ± 0.22 | 74.02 ± 0.49 | 88.90 ± 0.29
Vanilla Ensemble (64) | ✓ | ✗ | 67.88 ± 0.45 | 86.39 ± 0.29 | 77.30 ± 0.43 | 91.26 ± 0.23 | 73.74 ± 0.49 | 88.67 ± 0.29
Vanilla Ensemble (512) | ✓ | ✓ | 69.23 ± 0.45 | 86.70 ± 0.28 | 79.90 ± 0.43 | 91.70 ± 0.22 | 74.51 ± 0.48 | 89.11 ± 0.29
Random Ensemble (512) | ✓ | ✓ | 69.30 ± 0.45 | 86.74 ± 0.28 | 80.40 ± 0.43 | 91.94 ± 0.22 | 74.64 ± 0.48 | 89.15 ± 0.29
Guided Ensemble (512) | ✓ | ✓ | 69.48 ± 0.45 | 86.75 ± 0.28 | 82.99 ± 0.43 | 92.62 ± 0.22 | 74.98 ± 0.48 | 89.40 ± 0.29
Our proposed compositional transformation enlarges the expressivity of the transformation function. When Tukey's ladder of powers transformation is used individually, as reported in Yang et al. (2021), the optimal λ is not 0; but if an additional linear transformation g is inserted between f and h, λ = 0 coupled with a proper b can give an even better result, as shown in Figures 2 and K.12. Importantly, from Figure 2, a combination of λ and b with good performance on the validation set can also produce a satisfactory result on the testing set, suggesting that it is possible to optimize the hyperparameters on the validation set and generalize well to the testing set. In terms of the polymorphism induced by various transformations in the feature space, Figure F.1 exhibits the t-SNE visualizations of the original features and the features after three different kinds of transformations, showing that the relative positions of different novel classes are largely changed, especially after the compositional transformation (as shown in D). Besides commonly used data augmentation, this transformation provides another level of diversity that may be beneficial to the subsequent ensemble.
The results for different levels of ensemble are shown in Table F.4, in which the number of ensemble members is also indicated. Although transformation ensemble does not involve any change to the feature, it can largely improve the results for 1-shot FSL, from 65.37% to 66.45% on mini-ImageNet, from 78.57% to 80.98% on CUB, and from 72.83% to 74.02% on tiered-ImageNet, respectively, probably because 1-shot FSL is more prone to overfitting due to its severe data deficiency. Feature-level ensemble, on the other hand, is more important for 5-shot FSL, especially for mini-ImageNet. When combining the two levels together, the number of ensemble members increases to 512 and the performance significantly surpasses each individual level. On all three datasets, the guided ensemble scheme always achieves the best result for both single-shot and multi-shot cases, showing that the validation set can indeed be used to guide the subset selection and that our method is robust across classes in the same domain. When no such validation set is available, the full ensemble and random ensemble schemes can also give comparable results.
¹Downloaded from https://github.com/nupurkmr9/S2M2_fewshot
To inspect how performance changes with different numbers of ensemble members, we exhibit the distribution of accuracy at three ensemble levels for mini-ImageNet in Figures F.2 and F.3, for CUB in Figures F.4 and F.5, and for tiered-ImageNet in Figures F.6 and F.7. Figure (b) in each of them also exhibits the correlation between the testing and validation sets for all 512 configurations. Clearly, a better result is often reached when there are more configurations in the ensemble, validating the efficacy of our method for improving the performance and robustness of FSL. A sketch of the procedure appears after the listing below.
Algorithm 2: VD with Surrogate Representation for Episode T.
Data: Base classes D, Support Set S = {(x_i, y_i)}_{i=1}^{K×N}, y_i ∈ C_T, query sample x
Result: d̃
1  D′ ← (P_{w,b,λ} ∘ φ ∘ T)(D) ;  / Extract and transform features
2  S′ ← (P_{w,b,λ} ∘ φ ∘ T)(S);
3  z ← (P_{w,b,λ} ∘ φ ∘ T)(x);
4  for t ← 1, ..., |C_base| do  / Compute prototypes of base classes
5      c′_t ← (1/|{(z′, y) | z′ ∈ D′, y = t}|) ∑_{z′∈D′, y=t} z′
6  end
7  for k ← 1, ..., K do  / Compute prototypes from support samples
8      c_k ← (1/N) ∑_{z′∈S′, y=k} z′;
9      d_k ← d(z, c_k)
10 end
11 C_surrogate ← ∅;
12 for k ← 1, ..., K do  / Find surrogate classes
13     C_surrogate ← C_surrogate ∪ Top-R_{t∈{1,...,|C_base|}} d(c_k, c′_t)
14 end
15 R̃ ← |C_surrogate|;
16 d′ ← (d(z, c′_1), ..., d(z, c′_R̃)) ;  / Compute surrogate representation for query sample
17 for k ← 1, ..., K do  / Compute surrogate representations for support samples
18     d′_k ← (d(c_k, c′_1), ..., d(c_k, c′_R̃));
19     d′′_k ← d(d′, d′_k)
20 end
21 d̃ ← β·d/||d||₁ + γ·d′′/||d′′||₁ ;  / Compute final criterion
22 return d̃
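A minimal NumPy sketch of Algorithm 2 for a single query is given below; the names and toy dimensions are our own, and prototypes are assumed to be precomputed from already-transformed features.

```python
import numpy as np

def surrogate_criterion(z, support_protos, base_protos, R=10, beta=1.0, gamma=1.0):
    """support_protos: (K, n) novel-class prototypes c_k;
    base_protos: (|C_base|, n) base-class prototypes c'_t.
    Returns the fused criterion d~ of equation (8); lower is better."""
    d = np.linalg.norm(support_protos - z, axis=1)           # feature distances d_k
    # top-R nearest base classes per novel prototype -> surrogate classes (eq. (6))
    d_base = np.linalg.norm(support_protos[:, None] - base_protos[None], axis=2)
    surrogate = np.unique(d_base.argsort(axis=1)[:, :R])     # union over novel classes
    # surrogate representations: distances to the surrogate base prototypes (eq. (7))
    d_query = np.linalg.norm(base_protos[surrogate] - z, axis=1)   # d'
    d_supp = d_base[:, surrogate]                                  # d'_k
    d2 = np.linalg.norm(d_supp - d_query, axis=1)            # d''_k = d(d', d'_k)
    return beta * d / d.sum() + gamma * d2 / d2.sum()        # equation (8)

# toy usage: the predicted class is the argmin of the criterion
K, n, B = 5, 64, 20
rng = np.random.default_rng(0)
crit = surrogate_criterion(rng.normal(size=n), rng.normal(size=(K, n)),
                           rng.normal(size=(B, n)))
print(crit.argmin())
```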
[Figure F.2 here: (a) transformation-level ensemble accuracy vs. number of ensemble members; (b) testing-set vs. validation-set accuracy for all configurations (transformation-level, feature-level, and Dirichlet Tessellation ensembles, with DC and S2M2-R for reference); (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble schemes; all on 5-way 5-shot mini-ImageNet.]
Figure F.2: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool.
[Figure F.3 here: (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble schemes; all on 5-way 1-shot mini-ImageNet.]
Figure F.3: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool.
[Figure F.4 here: (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble schemes; all on 5-way 5-shot CUB.]
Figure F.4: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool.
[Figure F.5 here: (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble schemes; all on 5-way 1-shot CUB.]
Figure F.5: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool.
[Figure F.6 here: (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble schemes; all on 5-way 5-shot tiered-ImageNet.]
Figure F.6: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool.
[Figure F.7 here: (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble schemes; all on 5-way 1-shot tiered-ImageNet.]
Figure F.7: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool.
[Figure G.8 here: accuracy of VD vs. number of shots on mini-ImageNet; data points: 1 shot: 65.31%, 3: 80.12%, 5: 84.05%, 7: 85.95%, 10: 87.60%, 15: 88.91%, 20: 89.75%, 40: 90.63%, 100: 91.18%, 200: 91.22%, 400: 91.55%.]
Figure G.8: The accuracy of VD with increasing number of shots on the mini-ImageNet dataset.
G DEEPVORO++: FURTHER IMPROVEMENT OF FSL VIA SURROGATE REPRESENTATION
G.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we introduce another layer of heterogeneity, namely geometry-level heterogeneity, which exists in our surrogate representation. In Definition 3.4, increasing R will enlarge the degree of locality when searching for the top-R surrogate classes. In equation (8), if we set γ = 1, then increasing β will make the model rely more on the feature representation and less on the surrogate representation. In order to balance R and β, we perform a grid search over different combinations of R and β on the validation set, as shown in Figures K.13, K.14, and K.15. For each R, we select the β that gives the best result on the validation set, and use this (R, β) on the testing set, resulting in 10 such pairs in total; a sketch of this selection is given below. So there are 10 models in the geometry-level heterogeneity, standing for different degrees of locality. In conjunction with feature-level (64 kinds of augmentations) and transformation-level (here only the top-2 best transformations are used) heterogeneities, there are now 1280 different kinds of configurations in our configuration pool that will be used by the CCVD model. In conclusion, there are overall 512 + 1280 = 1792 configurations for a few-shot episode. Generating ∼1800 ensemble candidates is nearly intractable for parametric methods like logistic regression or the cosine classifier, which may consume e.g. months for thousands of episodes. However, the VD model is nonparametric and highly efficient, making it empirically possible to collect all the combinations and integrate them all together via CCVD. The complete algorithm for the computation of the surrogate representation is shown in Algorithm 2.
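The validation-guided selection of the 10 (R, β) pairs could look like the following sketch; the candidate grids and the function name are placeholder assumptions.

```python
import numpy as np

def select_geometry_configs(val_acc_grid, R_values, beta_values):
    """For every degree of locality R, keep the beta with the highest validation
    accuracy; the resulting (R, beta) pairs form the geometry-level pool."""
    pairs = []
    for r, R in enumerate(R_values):
        best_b = int(np.argmax(val_acc_grid[r]))   # best beta for this R
        pairs.append((R, beta_values[best_b]))
    return pairs

# placeholder grids: 10 values of R, 4 candidate betas, random stand-in accuracies
R_values = list(range(1, 11))
beta_values = [0.5, 1.0, 2.0, 4.0]
grid = np.random.rand(len(R_values), len(beta_values))
print(select_geometry_configs(grid, R_values, beta_values))  # 10 (R, beta) pairs
```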
G.2 RESULTS
The heatmaps for different (R, β) pairs on the testing/validation sets are shown in Figure K.13 for mini-ImageNet, in Figure K.14 for CUB, and in Figure K.15 for tiered-ImageNet, respectively. Basically, the testing and validation sets follow the same pattern. When R is small, i.e. only a small number of base classes are used as surrogates, a higher weight should be placed on the feature representation. With a fixed β, increasing R beyond a certain threshold will potentially cause a drop in accuracy, probably because the meaningful similarities are likely to be overwhelmed by the signals from the large volume of irrelevant base classes.
Table G.5: Ablation study of DeepVoro++'s performance with different levels of ensemble. The number of ensemble members is given in parentheses.
Methods | Feature-level | Transformation-level | Geometry-level | mini-ImageNet | CUB | tiered-ImageNet
No Ensemble | ✗ | ✗ | ✗ | 65.37 ± 0.44 | 78.57 ± 0.44 | 72.83 ± 0.49
Vanilla Ensemble (20) | ✗ | ✓ | ✓ | 68.38 ± 0.46 | 80… | …

1. What is the main contribution of the paper in few-shot classification?
2. What are the strengths of the proposed approach, particularly in its mathematical formulation and integration of various classifiers and representations?
3. What are the weaknesses of the paper regarding its readability and the novelty of the proposed methods and features?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The authors introduce the use of Cluster-induced Voronoi Diagrams for few-shot classification, and show that they can be used to combine feature and surrogate representations with various types of few-shot classifiers (e.g. nearest neighbour, linear classifier, and cosine) and various types of heterogeneities (e.g. feature level, transformation level, and geometry level) into a single, coherent, mathematical formulation.
Review
Positive:
The use of the Cluster-induced Voronoi Diagram and its variant introduced in this paper are novel to FSL (to the best of my knowledge).
The resulting voronoi-diagram formulation is geometrically elegant, and allows the integration of various classifiers, heterogeneities and types of feature representations.
The results appear to be state of the art (at least relative to the compared works) and standard datasets used by the community.
The ablation study appears to be a good first order approximation of what I'd expect the authors to verify.
Negative:
The paper is a bit difficult to read. I'd attribute this primarily to the extended use of the appendix. I agree that proofs can be included primarily in the appendix, but I do not particularly appreciate that much of the ablation results and tables are included only in the appendix.
Beyond the geometric point of view of FSL (which I do appreciate), to my understanding, the paper proposes a relatively straightforward aggregation of methods/features/representations for FSL. Furthermore, the classifiers it integrates and the various heterogeneities are not novel, and neither is the cluster-induced Voronoi Diagram (which itself is a relatively straightforward generalization of Voronoi diagrams).
ICLR | Title
Few-shot Learning via Dirichlet Tessellation Ensemble
Abstract
Few-shot learning (FSL) is the process of rapid generalization from abundant base samples to inadequate novel samples. Despite extensive research in recent years, FSL is not yet able to generate satisfactory solutions for a wide range of real-world applications. To confront this challenge, we study the FSL problem from a geometric point of view in this paper. One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space. We retrofit it by making use of a recent advance in computational geometry called Cluster-induced Voronoi Diagram (CIVD). Starting from the simplest nearest neighbor model, CIVD gradually incorporates cluster-to-point and then cluster-to-cluster relationships for space subdivision, which is used to improve the accuracy and robustness at multiple stages of FSL. Specifically, we use CIVD (1) to integrate parametric and nonparametric few-shot classifiers; (2) to combine feature representation and surrogate representation; (3) and to leverage feature-level, transformation-level, and geometry-level heterogeneities for a better ensemble. Our CIVD-based workflow enables us to achieve new state-of-the-art results on the mini-ImageNet, CUB, and tiered-ImageNet datasets, with ∼2%−5% improvements upon the next best. To summarize, CIVD provides a mathematically elegant and geometrically interpretable framework that compensates for extreme data insufficiency, prevents overfitting, and allows for fast geometric ensemble of thousands of individual VDs. These together make FSL stronger.
1 INTRODUCTION
Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason for this is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interest (Wang et al., 2020).
Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real-world scenarios a query sample (e.g. a patient) comes individually and is unique, for instance in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, we in this paper adhere to the inductive setting and make on-the-fly predictions for each newly seen sample.
Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. Despite the extensive research on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?
All four authors are corresponding authors.
For the first question, data augmentation has been a successful approach to expand the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) (Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014) (Zhang et al., 2019; Chen et al., 2019b). However, in each case, the authenticity of either the augmented data or the features is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigate support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for a better discrimination on novel classes, which all introduces extra training burden. As a matter of fact, we still lack a method that makes full use of the base classes and the pretrained model effectively.
In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, nearest neighbor-like approaches, e.g. ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. Geometrically, what a nearest neighbor-based method does, under the hood, is to partition the feature space into a Voronoi Diagram (VD) that is induced by the feature centroids of the novel classes. Although highly efficient and simple, a Voronoi Diagram coarsely draws the decision boundary via linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure that arises in FSL.
To resolve this issue, we adopt a novel technique called Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), a recent breakthrough in computational geometry. CIVD generalizes VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating
region (or Voronoi cell) not only for a point (e.g. a class prototype) but also for a cluster of points, guaranteed to have a (1 + ε)-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. CIVD provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than VD, without losing the resistance to overfitting.
Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows.
1. We first categorize different types of few-shot classifiers as different variants of Voronoi Diagram: nearest neighbor model as Voronoi Diagram, linear classifier as Power Diagram, and cosine classifier as spherical Voronoi Diagram (Table 1). We then unify them via CIVD that enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--).
2. Going from cluster-to-point to cluster-to-cluster influence, we further propose Cluster-to-cluster Voronoi Diagram (CCVD), as a natural extension of CIVD. Based on CCVD, we present DeepVoro which enables fast geometric ensemble of a large pool of thousands of configurations for FSL.
3. Instead of using base classes for distribution calibration and data augmentation (Yang et al., 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++ that integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL.
Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all
three benchmark datasets including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures.
2 RELATED WORK
Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metric-based methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover's Distance (Zhang et al., 2020a;b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based methods (Zhang et al., 2021b; Mangla et al., 2020) incorporate supervision from the data itself to learn a more robust feature extractor. (4) Ensembling is another powerful technique that boosts the performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) trains several networks simultaneously and encourages robustness and cooperation among them. However, due to the high computation load of training deep models, this ensemble is restricted by the number of networks, which is typically <20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks was first hinted at by Raghu et al. (2017), who reveal that piecewise linear activations subdivide the input space into convex polytopes. Then, Balestriero et al. (2019) point out that the exact structure is a Power Diagram (Aurenhammer, 1987), which is subsequently applied to recurrent neural networks (Wang et al., 2018) and generative models (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing the feature space. Recently, Chen et al. (2013; 2017) and Huang et al. (2021) use an influence function F(C, z) to measure the joint influence of all objects in C on a query z to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL.
3 METHODOLOGY
3.1 PRELIMINARIES
Few-shot learning aims at discriminating between novel classes C_novel with the aid of a larger amount of samples from base classes C_base, C_novel ∩ C_base = ∅. The whole learning process usually follows the meta-learning scheme. Formally, given a dataset of base classes D = {(x_i, y_i)}, x_i ∈ D, y_i ∈ C_base, with D being an arbitrary domain, e.g. natural images, a deep neural network z = φ(x), z ∈ R^n, which maps from the image domain D to the feature domain R^n, is trained using a standard gradient descent algorithm, after which φ is fixed as a feature extractor. This process is referred to as the meta-training stage, which squeezes out the commonsense knowledge from D. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of K-way N-shot tasks (episodes) {T}. Each such episode is further decomposed into a support set S = {(x_i, y_i)}_{i=1}^{K×N}, y_i ∈ C_T, and a query set Q = {(x_i, y_i)}_{i=1}^{K×Q}, y_i ∈ C_T, in which the episode classes C_T are a randomly sampled subset of C_novel with cardinality K, and each class contains only N and Q random samples in the support set and query set, respectively. For few-shot classification, we introduce here two widely used schemes as follows. For simplicity, all samples here are from S and Q, without data augmentation applied.
Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019), etc., a prototype c_k is acquired by averaging over all supporting features for a class k ∈ C_T:
c_k = (1/N) ∑_{x∈S, y=k} φ(x)    (1)
Then each query sample x ∈ Q is classified by finding the nearest prototype: ŷ = argmin_k d(z, c_k), with d(z, c_k) = ||z − c_k||²₂, in which we use the Euclidean distance as the distance metric d.
Linear Classifier (Parametric). Another scheme uses a linear classifier with the cross-entropy loss optimized on the supporting samples:
L(W, b) = ∑_{(x,y)∈S} −log p(y | φ(x); W, b) = ∑_{(x,y)∈S} −log [exp(W_y^T φ(x) + b_y) / ∑_k exp(W_k^T φ(x) + b_k)]    (2)
in which W_k, b_k are the linear weight and bias for class k, and the predicted class for a query x ∈ Q is ŷ = argmax_k p(y | z; W_k, b_k).
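As an illustration of the nonparametric head, a minimal PyTorch sketch combining equation (1) with the nearest-prototype rule is given below; the function name and toy episode sizes are our own.

```python
import torch

def prototype_classify(support_z, support_y, query_z, n_way):
    """Nonparametric head: class prototypes (equation (1)) + nearest-neighbor rule."""
    protos = torch.stack([support_z[support_y == k].mean(dim=0)
                          for k in range(n_way)])   # c_k, one prototype per class
    dists = torch.cdist(query_z, protos)            # Euclidean distance to each c_k
    return dists.argmin(dim=1)                      # y^ = argmin_k d(z, c_k)

# toy 5-way 5-shot episode with 64-dimensional features and 10 queries
z_s = torch.randn(25, 64)
y_s = torch.arange(5).repeat_interleave(5)
z_q = torch.randn(10, 64)
print(prototype_classify(z_s, y_s, z_q, n_way=5))
```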
3.2 FEW-SHOT LEARNING AS CLUSTER-INDUCED VORONOI DIAGRAMS
In this section, we first introduce the basic concepts of Voronoi Tessellations, and then show how parametric/nonparametric classifier heads can be unified by VD.
Definition 3.1 (Power Diagram and Voronoi Diagram). Let Ω = {ω_1, ..., ω_K} be a partition of the space R^n, and C = {c_1, ..., c_K} be a set of centers such that ∪_{r=1}^K ω_r = R^n, ∩_{r=1}^K ω_r = ∅. Additionally, each center is associated with a weight ν_r ∈ {ν_1, ..., ν_K} ⊆ R_+. Then the set of pairs {(ω_1, c_1, ν_1), ..., (ω_K, c_K, ν_K)} is a Power Diagram (PD), where each cell is obtained via ω_r = {z ∈ R^n : r(z) = r}, r ∈ {1, ..., K}, with
r(z) = argmin_{k∈{1,...,K}} d(z, c_k)² − ν_k.    (3)
If the weights are equal for all k, i.e. ν_k = ν_{k′}, ∀k, k′ ∈ {1, ..., K}, then a PD collapses to a Voronoi Diagram (VD).
By definition, it is easy to see that the nearest neighbor classifier naturally partitions the space into K cells with centers {c_1, ..., c_K}. Here we show that the linear classifier also induces a VD under a mild condition.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W, b partitions the input space R^n into a Voronoi Diagram with centers {c̃_1, ..., c̃_K} given by c̃_k = (1/2)W_k if b_k = −(1/4)||W_k||²₂, k = 1, ..., K.
Proof. See Appendix B for details.
3.2.1 FROM VORONOI DIAGRAM TO CLUSTER-INDUCED VORONOI DIAGRAM
Now that both nearest neighbor and linear classifier have been unified by VD, a natural idea is to integrate them together. Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al.,
2021) is a generalization of VD which allows multiple centers in a cell; it has been successfully used for clinical diagnosis from biomedical images (Wang et al., 2015), providing an ideal tool for the integration of parametric/nonparametric classifiers for FSL. Formally:
Definition 3.2 (Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al., 2021)). Let Ω = {ω_1, ..., ω_K} be a partition of the space R^n, and C = {C_1, ..., C_K} be a set (possibly a multiset) of clusters. The set of pairs {(ω_1, C_1), ..., (ω_K, C_K)} is a Cluster-induced Voronoi Diagram (CIVD) with respect to the influence function F(C_k, z), where each cell is obtained via ω_r = {z ∈ R^n : r(z) = r}, r ∈ {1, ..., K}, with
r(z) = argmax_{k∈{1,...,K}} F(C_k, z).    (4)
Here C can be either a given set of clusters or even the whole power set of a given point set, and the influence function is defined as a function over the collection of distances from each member in a cluster C_k to a query point z:
Definition 3.3 (Influence Function). The influence from C_k, k ∈ {1, ..., K}, to z ∉ C_k is F(C_k, z) = F({d(c_k^(i), z) | c_k^(i) ∈ C_k}_{i=1}^{|C_k|}). In this paper F is assumed to have the following form:
F(C_k, z) = −sign(α) ∑_{i=1}^{|C_k|} d(c_k^(i), z)^α.    (5)
The sign function here makes sure that F is a monotonically decreasing function with respect to the distance d. The hyperparameter α controls the magnitude of the influence; for example, for gravitational force α = −(n−1) in n-dimensional space, and for electric force α = −2. Since the nearest neighbor centers {c_k}_{k=1}^K and the centers introduced by the linear classifier {c̃_k}_{k=1}^K are obtained from different schemes and could both be informative, we merge the corresponding centers for a novel class k into a new cluster C_k = {c_k, c̃_k}, and use the resulting C = {C_1, ..., C_K} to establish a CIVD. In such a way, the final partition may enjoy the advantages of both parametric and nonparametric classifier heads. We name this approach DeepVoro--.
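A minimal sketch of the resulting CIVD cell assignment, assuming NumPy arrays for the per-class clusters, is given below; the names are illustrative.

```python
import numpy as np

def civd_classify(query_z, clusters, alpha=1.0):
    """Assign a query to the CIVD cell with maximal influence (equation (5)).
    clusters[k] is an array of centers for class k, e.g. {c_k, c~_k} in DeepVoro--."""
    scores = []
    for C in clusters:
        d = np.linalg.norm(C - query_z, axis=1)              # distances to members
        scores.append(-np.sign(alpha) * np.sum(d ** alpha))  # F(C_k, z)
    return int(np.argmax(scores))                            # r(z), equation (4)

# toy usage: each class is represented by its VD center and its Voronoi-LR center
rng = np.random.default_rng(0)
clusters = [rng.normal(loc=k, size=(2, 64)) for k in range(5)]
print(civd_classify(rng.normal(size=64), clusters))
```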
3.3 FEW-SHOT CLASSIFICATION VIA SURROGATE REPRESENTATION
In the nearest neighbor classifier head, the distance from a query feature z to each of the prototypes {c_k}_{k=1}^K is the key discrimination criterion for classification. We rewrite {d(z, c_k)}_{k=1}^K as a vector d ∈ R^K such that d_k = d(z, c_k). These distances are acquired by measuring the distance between two points in high dimension: z, c_k ∈ R^n. However, a notorious behavior of high dimensions is that the ratio between the nearest and farthest points in a point set P approaches 1 (Aggarwal et al., 2001), making {d(z, c_k)}_{k=1}^K less discriminative for classification, especially for the FSL problem with sample size N·K ≪ n. Hence, in this paper, we seek a surrogate representation. In the human perception and learning system, similarity among familiar and unfamiliar objects plays a key role in object categorization and classification (Murray et al., 2002), and it has been experimentally verified by functional magnetic resonance imaging (fMRI) that a large region in the occipitotemporal cortex processes the shapes of both meaningful and unfamiliar objects (Op de Beeck et al., 2008). In our method, a connection will be built between each unfamiliar novel class in C_novel and each related well-perceived familiar class in C_base. So the first step is to identify the most relevant base classes for a specific task T. Concretely:
Definition 3.4 (Surrogate Classes). In episode T, given the set of prototypes {c_k}_{k=1}^K for the support set S and the set of prototypes {c′_t}_{t=1}^{|C_base|} for the base set D, the surrogate classes for the episode classes C_T are given as:
C_surrogate(T) = ∪_{k=1}^K Top-R_{t∈{1,...,|C_base|}} d(c_k, c′_t)    (6)
in which the top-R function returns the R base class indices with the smallest distances to c_k, and the center for a base class t is given as c′_t = (1/|{(x, y) | x ∈ D, y = t}|) ∑_{x∈D, y=t} φ(x). Here R is a hyperparameter.
The rationale behind this selection, instead of simply using the whole set of base classes C_base, is that the episode classes C_T overlap with only a portion of the base classes (Zhang et al., 2021a), and discriminative similarities are likely to be overwhelmed by the background signal, especially when the number of base classes is large. After the surrogate classes are found, we re-index their feature centers as {c′_j}_{j=1}^{R̃}, R̃ ≤ R·K. Then, both the support centers {c_k}_{k=1}^K and the query feature z are represented by the collection of similarities to these surrogate centers:
d′_k = (d(c_k, c′_1), ..., d(c_k, c′_R̃)), k = 1, ..., K
d′ = (d(z, c′_1), ..., d(z, c′_R̃))    (7)
where d′_k, d′ ∈ R^R̃ are the surrogate representations for novel class k and the query feature z, respectively. With the surrogate representation, the prediction is found through ŷ = argmin_k d(d′, d′_k) = argmin_k ||d′ − d′_k||²₂. This set of discriminative distances is rewritten as d′′ ∈ R^K such that d′′_k = d(d′, d′_k). An illustration of the surrogate representation is shown in Figure 1 on MultiDigitMNIST, a demonstrative dataset.
Integrating Feature Representation and Surrogate Representation. Until now, we have two discriminative systems, i.e., the feature-based d ∈ R^K and the surrogate-based d′′ ∈ R^K. A natural idea is to combine them to form the following final criterion:
d̃ = β d/||d||₁ + γ d′′/||d′′||₁,    (8)
where d and d′′ are normalized by their Manhattan norms, ||d||₁ = ∑_{k=1}^K d_k and ||d′′||₁ = ∑_{k=1}^K d′′_k, respectively, and β and γ are two hyperparameters adjusting the weights of the feature representation and the surrogate representation.
3.4 DEEPVORO: INTEGRATING MULTI-LEVEL HETEROGENEITY OF FSL
In this section we present DeepVoro, a fast geometric ensemble framework that unites our contributions to multiple stages of FSL, and show how it can be promoted to DeepVoro++ by incorporating surrogate representation.
Compositional Feature Transformation. It is believed that FSL algorithms favor features with more Gaussian-like distributions, and thus various kinds of transformations are used to improve the normality of the feature distribution, including power transformation (Hu et al., 2021), Tukey's Ladder of Powers Transformation (Yang et al., 2021), and L2 normalization (Wang et al., 2019). While these transformations are normally used independently, here we propose to combine several transformations sequentially in order to enlarge the expressivity of the transformation function and to increase the polymorphism of the FSL process. Specifically, for a feature vector z, three kinds of transformations are considered. (I) L2 Normalization. By projection onto the unit sphere in R^n, the feature is normalized as f(z) = z/||z||₂. (II) Linear Transformation. Since all the features are now located on the unit sphere, we can do scaling and shifting via a linear transformation: g_{w,b}(z) = wz + b. (III) Tukey's Ladder of Powers Transformation. Finally, Tukey's Ladder of Powers Transformation is applied on the feature: h_λ(z) = z^λ if λ ≠ 0, and h_λ(z) = log(z) if λ = 0. By the composition of L2 normalization, linear transformation, and Tukey's Ladder of Powers Transformation, the transformation function becomes (h_λ ∘ g_{w,b} ∘ f)(z), parameterized by w, b, λ.
Multi-level Heterogeneities in FSL. Now we are ready to articulate the hierarchical heterogeneity existing in different stages of FSL. (I) Feature-level Heterogeneity: Data augmentation has been exhaustively explored for expanding the data size of FSL (Ni et al., 2021), including but not limited to rotation, flipping, cropping, erasing, solarization, color jitter, MixUp (Zhang et al., 2017), etc. The modification of an image x will change the position of its feature z in the feature space. We denote all possible translations of an image as a set of functions {T}. (II) Transformation-level Heterogeneity: After obtaining the feature z, a parameterized transformation is applied to it, and the resulting features can be quite different for different parameters (see Figure F.1). We denote the set of all possible transformations as {P_{w,b,λ}}. (III) Geometry-level Heterogeneity: Even with the provided feature, the few-shot classification model can still be diverse: whether a VD or PD-based model is used, whether the feature or the surrogate representation is adopted, and the setting of R will also change the degree of locality. We denote all possible models as {M}.
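A minimal sketch of the compositional transformation P_{w,b,λ} described above, assuming a nonnegative feature vector (as produced by a post-ReLU backbone), is given below.

```python
import numpy as np

def compositional_transform(z, w=1.0, b=0.0, lam=0.5):
    """P_{w,b,lambda}: L2 normalization, then linear scaling/shifting,
    then Tukey's Ladder of Powers (log when lambda = 0)."""
    z = z / np.linalg.norm(z)                     # f: project onto the unit sphere
    z = w * z + b                                 # g_{w,b}: linear transformation
    return z ** lam if lam != 0 else np.log(z)    # h_lambda

# e.g. the lambda = 0, w = 1, b = 0.04 combination shown in Figure F.1 (D);
# powers/log assume strictly positive entries, hence the nonnegative toy feature
feat = np.abs(np.random.randn(64)) + 1e-6
print(compositional_transform(feat, b=0.04, lam=0.0)[:4])
```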
DeepVoro for Fast Geometric Ensemble of VDs. With the above three-layer heterogeneity, the FSL process can be encapsulated as (M ∘ P_{w,b,λ} ∘ φ ∘ T)(x), and all possible configurations of a given episode T with a fixed φ form the Cartesian product of these three sets: {T} × {P_{w,b,λ}} × {M}. Indeed, when a hold-out validation dataset is available, it can be used to find the optimal combination, but by virtue of ensemble learning, multiple models can still contribute positively to FSL (Dvornik et al., 2019). Since the cardinality of the resulting configuration set could be very large, the FSL model M as well as the ensemble algorithm is required to be highly efficient. The VD is a nonparametric model and no training is needed during the meta-testing stage, making it suitable for fast geometric ensemble. While CIVD models the cluster-to-point relationship via an influence function, here we further extend it so that cluster-to-cluster relationships can be considered. This motivates us to define the Cluster-to-cluster Voronoi Diagram (CCVD):
Definition 3.5 (Cluster-to-cluster Voronoi Diagram). Let Ω = {ω_1, ..., ω_K} be a partition of the space R^n, and C = {C_1, ..., C_K} be a set of totally ordered sets with the same cardinality L (i.e. |C_1| = |C_2| = ... = |C_K| = L). The set of pairs {(ω_1, C_1), ..., (ω_K, C_K)} is a Cluster-to-cluster Voronoi Diagram (CCVD) with respect to the influence function F(C_k, C(z)), and each cell is obtained via ω_r = {z ∈ R^n : r(z) = r}, r ∈ {1, ..., K}, with
r(z) = argmax_{k∈{1,...,K}} F(C_k, C(z))    (9)
where C(z) is the cluster (also a totally ordered set with cardinality L) to which the query point z belongs; that is to say, all points in this cluster (the query cluster) will be assigned to the same cell. Similarly, the influence function is defined upon two totally ordered sets C_k = {c_k^(i)}_{i=1}^L and C(z) = {z^(i)}_{i=1}^L:
F(C_k, C(z)) = −sign(α) ∑_{i=1}^L d(c_k^(i), z^(i))^α.    (10)
With this definition, we are now able to streamline our aforementioned novel approaches into a single ensemble model. Suppose there are in total L possible settings in our configuration pool {T} × {P_{w,b,λ}} × {M}. For all configurations {ρ_i}_{i=1}^L, we apply them onto the support set S to generate the K totally ordered clusters {{c_k^(ρ_i)}_{i=1}^L}_{k=1}^K, including each center c_k^(ρ_i) derived through configuration ρ_i, and onto a query sample x to generate the query cluster C(z) = {z^(ρ_1), ..., z^(ρ_L)}, and then plug these two into Definition 3.5 to construct the final Voronoi Diagram.
When only the feature representation is considered in the configuration pool, i.e. ρ_i ∈ {T} × {P_{w,b,λ}}, our FSL process is named DeepVoro; if the surrogate representation is also incorporated, i.e. ρ_i ∈ {T} × {P_{w,b,λ}} × {M}, DeepVoro is promoted to DeepVoro++, which allows for higher geometric diversity. See Appendix A for a summary of the notations and acronyms.
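For illustration, the CCVD decision rule of Definition 3.5 can be sketched as follows, assuming the L transformed query features and the K×L class centers are already computed; the names and toy sizes are our own.

```python
import numpy as np

def ccvd_classify(query_cluster, class_clusters, alpha=1.0):
    """Cluster-to-cluster influence (equation (10)): the i-th query view is matched
    with the i-th center of every class, i.e. both clusters share the ordering rho_i.
    query_cluster : (L, n) transformed query features, one per configuration
    class_clusters: (K, L, n) centers c_k^(rho_i)"""
    d = np.linalg.norm(class_clusters - query_cluster[None], axis=2)   # (K, L)
    F = -np.sign(alpha) * (d ** alpha).sum(axis=1)                     # F(C_k, C(z))
    return int(np.argmax(F))                                           # r(z), eq. (9)

# toy usage: L = 512 configurations, K = 5 classes, 64-dimensional features
rng = np.random.default_rng(0)
print(ccvd_classify(rng.normal(size=(512, 64)), rng.normal(size=(5, 512, 64))))
```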
4 EXPERIMENTS
The main goals of our experiments are to: (1) validate the strength of CIVD to integrate parametric and nonparametric classifiers and confirm the necessity of Voronoi reduction; (2) investigate how different levels of heterogeneity individually or collaboratively contribute to the overall result, and compare them with the state-of-the-art methods; (3) reanalyze this ensemble when the surrogate representation comes into play, and see how it could ameliorate the extreme shortage of support samples. See Table 2 for a summary and Appendix D for the detailed descriptions of the mini-ImageNet (Vinyals et al., 2016), CUB (Welinder et al., 2010), and tiered-ImageNet (Ren et al., 2018) datasets that are used in this paper.
DeepVoro--: Integrating Parametric and Nonparametric Methods via CIVD. To verify our proposed CIVD model for the integration of parametric/nonparametric FSL classifiers, we first run three standalone models: Logistic Regressions with Power/Voronoi Diagrams as the underlying geometric structure (Power-LR/Voronoi-LR), and the vanilla Voronoi Diagram (VD, i.e. the nearest neighbor model), and then integrate VD with either Power- or Voronoi-LR (see Appendix E for details). Interestingly, VD with Power-LR never reaches the best result, suggesting that ordinary LR cannot be integrated with VD due to their intrinsically distinct geometric structures. After the proposed Voronoi reduction (Theorem 3.1), however, VD+Voronoi-LR is able to improve upon both models in most cases, suggesting that CIVD can ideally integrate parametric and nonparametric models for better FSL.
DeepVoro: Improving FSL by Hierarchical Heterogeneities. In this section, we only consider two levels of heterogeneities for the ensemble: feature-level and transformation-level. For the feature-level ensemble, we utilize three kinds of image augmentations: rotation, flipping, and central cropping, summing up to 64 distinct ways of data augmentation (Appendix F). For the transformation-level ensemble, we use the proposed compositional transformations with 8 different combinations of λ and b that encourage diverse feature transformations (Appendix F.1) without loss of accuracy (Figure 2). The size of the resulting configuration pool becomes 512, and DeepVoro's performance is shown in Table 3. Clearly, DeepVoro outperforms all previous methods, especially on 5-way 5-shot FSL. Specifically, DeepVoro is better than the next best by 2.18% (than Ni et al. (2021)) on mini-ImageNet, by 1.47% (than Hu et al. (2021)) on CUB, and by 1.02% (than Yang et al. (2021)) on tiered-ImageNet. Note that this is an estimated improvement because not all competitive methods here are tested with the same random seed and number of episodes. More detailed results can be found in Appendix F. By virtue of CCVD and using the simplest VD as the building block, DeepVoro is arguably able to yield a consistently better result by the ensemble of a massive pool of independent VDs. DeepVoro also exhibits high resistance to outliers, as shown in Figure K.16.
DeepVoro++: Further Improvement of FSL via Surrogate Representation. In the surrogate representation, the number of neighbors R for each novel class and the weight β balancing the surrogate/feature representations are two hyperparameters. With the help of an available validation set, a natural question is whether these hyperparameters can be found through optimization on the validation set, which requires a good generalization of the hyperparameters across different novel classes. From Figure K.13, the accuracy of VD with varying hyperparameters shows good agreement between the testing and validation sets. With this in mind, we select 10 combinations of β and R, guided by the validation set, in conjunction with 2 different feature transformations and 64 different image augmentations, adding up to a large pool of 1280 configurations for the ensemble (denoted by DeepVoro++). As shown in Table 3, DeepVoro++ achieves the best results for 1-shot FSL: 2.53%
higher than Zhang et al. (2020b), 2.38% higher than Hu et al. (2021), and 1.09% higher than Zhang et al. (2020b), on three datasets, respectively, justifying the efficacy of our surrogate representation. See Appendix G for more detailed analysis.
Ablation Experiments and Running Time. Table 4 varies the level of heterogeneity (see Tables F.4 and G.5 for all datasets). The average accuracy of VDs without CCVD integration is marked by ], and is significantly lower than the fully-fledged ensemble. Table 5 presents the running time of DeepVoro(++) benchmarked on a 20-core Intel® Core™ i7 CPU with NumPy (v1.20.3), whose efficiency is comparable to DC/S2M2, even with >1000× diversity.
Experiments with Different Backbones, Meta-training Protocols, and Domains. Because different feature extraction backbones, meta-training losses, and degrees of discrepancy between the source/target domains will all affect the downstream FSL, we here examine the robustness of DeepVoro/DeepVoro++ under a number of different circumstances; details are shown in Appendices H, I, and J. Notably, DeepVoro/DeepVoro++ attains the best performance, by up to 5.55%, and is therefore corroborated as a superior method for FSL, regardless of the backbone, training loss, or domain.
5 CONCLUSION
In this paper, our contribution is threefold. We first theoretically unify parametric and nonparametric few-shot classifiers into a general geometric framework (VD) and show an improved result by virtue of this integration (CIVD). By extending CIVD to CCVD, we present a fast geometric ensemble method (DeepVoro) that takes into consideration thousands of FSL configurations with high efficiency. To deal with the extreme data insufficiency in one-shot learning, we further propose a novel surrogate representation which, when incorporated into DeepVoro, promotes the performance of one-shot learning to a higher level (DeepVoro++). In future studies, we plan to extend our geometric approach to meta-learning-based FSL and lifelong FSL.
ACKNOWLEDGMENTS
This research was supported in part by NSF through grant IIS-1910492.
REPRODUCIBILITY STATEMENT
Our code as well as data split, random seeds, hyperparameters, scripts for reproducing the results in the paper are available at https://github.com/horsepurve/DeepVoro.
A NOTATIONS AND ACRONYMS
Parameters for feature-level, transformation-level, and geometry-level heterogeneity are shown in yellow, blue, and red, respectively. See Sec. F for implementation details. †Here PD is reduced to VD by Theorem 3.1. ‡For every λ (or R), the b (or β) value with the highest validation accuracy is introduced into the configuration pool.
Methods | Geometric Structure | Centers | Tunable Param. | # | Description
DeepVoro-- | CIVD | C_k = {c_k, c̃_k}, with c_k from VD and c̃_k from PD† | − | − | −
DeepVoro | CCVD | C_k = {c_k^(ρ_i)}_{i=1}^L, ρ_i ∈ {T} × {P_{w,b,λ}} | angle of rotation | 4 | −
 | | | flipping or not | 2 | −
 | | | scaling & cropping | 8 | −
 | | | w = 1 | − | scale factor in linear transformation
 | | | b | 4 | shift factor in linear transformation
 | | | λ | 2 | exponent in powers transformation
 | | | #configurations | L = 512 |
DeepVoro++ | CCVD | C_k = {c_k^(ρ_i)}_{i=1}^L, ρ_i ∈ {T} × {P_{w,b,λ}} × {M} | angle of rotation | 4 | −
 | | | flipping or not | 2 | −
 | | | scaling & cropping | 8 | −
 | | | w = 1 | − | scale factor in linear transformation
 | | | b | 1‡ | shift factor in linear transformation
 | | | λ | 2 | exponent in powers transformation
 | | | R | 10 | the number of top-R nearest base prototypes for a novel prototype
 | | | γ = 1 | − | weight for surrogate representation
 | | | β | 1‡ | weight for feature representation
 | | | #configurations | L = 1280 |
B POWER DIAGRAM SUBDIVISION AND VORONOI REDUCTION
B.1 PROOF OF THEOREM 3.1
Lemma B.1. The vertical projection from the lower envelope of the hyperplanes {Π_k(z) : W_k^T z + b_k}_{k=1}^K onto the input space R^n defines the cells of a PD.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W, b partitions the input space R^n into a Voronoi Diagram with centers {c̃_1, ..., c̃_K} given by c̃_k = (1/2)W_k if b_k = −(1/4)||W_k||²₂, k = 1, ..., K.
Proof. We first articulate Lemma B.1 and find the exact relationship between the hyperplane Π_k(z) and the center of its associated cell in R^n. By Definition 3.1, the cell for a point z ∈ R^n is found by comparing d(z, c_k)² − ν_k for different k, so we define the power function p(z, S) expressing this value:
p(z, S) = (z − u)² − r²    (11)
in which S ⊆ R^n is a sphere with center u and radius r. In fact, the weight ν associated with a center in Definition 3.1 can be interpreted as the square of the radius, r². Next, let U denote the paraboloid y = z², and let Π(S) be the transform that maps a sphere S with center u and radius r into the hyperplane
Π(S) : y = 2z·u − u·u + r².    (12)
It can be proved that Π is a bijective mapping between arbitrary spheres in R^n and nonvertical hyperplanes in R^{n+1} that intersect U (Aurenhammer, 1987). Further, let z′ denote the vertical projection of z onto U and z′′ its vertical projection onto Π(S); then the power function can be written as
p(z, S) = d(z, z′) − d(z, z′′),    (13)
which implies the following relationship between a sphere in R^n and an associated hyperplane in R^{n+1} (Lemma 4 in Aurenhammer (1987)): let S_1 and S_2 be non-concentric spheres in R^n; then the bisector of their Power cells is the vertical projection of Π(S_1) ∩ Π(S_2) onto R^n. Now we have a direct relationship between the sphere S and the hyperplane Π(S), and comparing equation (12) with the hyperplanes used in logistic regression, {Π_k(z) : W_k^T z + b_k}_{k=1}^K, gives us
u = (1/2)W_k,  r² = b_k + (1/4)||W_k||²₂.    (14)
Although there is no guarantee that b_k + (1/4)||W_k||²₂ is always positive for an arbitrary logistic regression model, we can impose a constraint on r² to keep it at zero during the optimization, which implies
b_k = −(1/4)||W_k||²₂.    (15)
In this way, the radii of all K spheres become identical (all zero). After the optimization of the logistic regression model, the centers {(1/2)W_k}_{k=1}^K will be used for the CIVD integration.
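As a quick sanity check of Theorem 3.1, the following sketch verifies numerically, on random data, that the constrained linear classifier and the nearest-center rule with centers W_k/2 make identical predictions.

```python
import numpy as np

# With b_k = -1/4 ||W_k||^2, argmax_k (W_k . z + b_k) coincides with
# argmin_k ||z - W_k/2||^2, i.e. the linear classifier induces a VD.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 64))
b = -0.25 * (W ** 2).sum(axis=1)
z = rng.normal(size=(100, 64))

linear_pred = (z @ W.T + b).argmax(axis=1)
centers = 0.5 * W
voronoi_pred = np.linalg.norm(z[:, None] - centers[None], axis=2).argmin(axis=1)
assert (linear_pred == voronoi_pred).all()
```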
C DETAILS ABOUT THE DEMONSTRATIVE EXAMPLE ON MULTIDIGITMNIST DATASET
The MultiDigitMNIST (Sun, 2019) dataset is created by concatenating two (or three) digits of different classes from MNIST for few-shot image classification. Here we use the DoubleMNIST dataset (i.e. two digits in an image), consisting of 100 classes (00 to 99) with 1000 images of size 64 × 64 × 1 per class; the classes are further split into 64, 20, and 16 classes for training, testing, and validation, respectively. To better embed into the R² space, we pick a ten-class subset (00, 01, 12, 13, 04, 05, 06, 77, 08, and 09) as the base classes for meta-training, and another five-class subset (02, 49, 83, 17, and 36) for one episode. The feature extractor is a 4-layer convolutional network with an additional fully-connected layer for 2D embedding. In the left panel of Figure 1, the VD is obtained by setting the centroid of each base class as the Voronoi center. For each novel class, the Voronoi center is simply the 1-shot support sample (Figure 1, central panel). The surrogate representation is computed as the collection of distances from a support/query sample to each of the base classes, as shown in the right panel of Figure 1. Interestingly, the surrogate representations for a novel class, no matter whether it is a support sample (dotted line) or a query sample (colored line), generally follow a certain pattern (similar within a class, distinct across classes), making them ideal surrogates for distinguishing between different novel classes. In our paper, we design a series of algorithms answering multiple questions regarding this surrogate representation: how to select base classes for the calculation of the surrogate representation, how to combine it with the feature representation, and how to integrate it into the overall ensemble workflow.
D MAIN DATASETS
For a fair and thorough comparison with previous works, three widely-adopted benchmark datasets are used throughout this paper.
(1) mini-ImageNet (Vinyals et al., 2016) is a shrunk subset of ILSVRC-12 (Russakovsky et al., 2015), consists of 100 classes in which 64 classes for training, 20 classes for testing and 16 classes for validation. Each class has 600 images of size 84× 84× 3. (2) CUB (Welinder et al., 2010) is another benchmark dataset for FSL, especially fine-grained FSL, including 200 species (classes) of birds. CUB is an unbalanced dataset with 58 images in average per class, also of size 84 × 84 × 3. We split all classes into 100 base classes, 50 novel classes, and 50 validation classes, following previous works (Chen et al., 2019a).
(3) tiered-ImageNet (Ren et al., 2018) is another subset of ILSVRC-12 (Russakovsky et al., 2015) but has more images, 779,165 in total. All images are categorized into 351 base classes, 97 validation classes, and 160 novel classes. The number of images per class is not always the same, 1,281 on average. The image size is also 84 × 84 × 3.
E DEEPVORO--: INTEGRATING PARAMETRIC AND NONPARAMETRIC METHODS VIA CIVD
Table E.3: Cluster-induced Voronoi Diagram (CIVD) for the integration of the parametric Logistic Regression (LR) and the nonparametric nearest neighbor (i.e. Voronoi Diagram, VD) methods. The results from S2M2 R and DC are also included in this table for reference but are excluded from the comparison. The best result is marked in bold.
Methods mini-Imagenet CUB tiered-ImageNet
5-way 1-shot 5-way 5-shot 5-way 1-shot 5-way 5-shot 5-way 1-shot 5-way 5-shot
S2M2 R 64.65 ± 0.45 83.20 ± 0.30 80.14 ± 0.45 90.99 ± 0.23 68.12 ± 0.52 86.71 ± 0.34
DC 67.79 ± 0.45 83.69 ± 0.31 79.93 ± 0.46 90.77 ± 0.24 74.24 ± 0.50 88.38 ± 0.31
Power-LR 65.45 ± 0.44 84.47 ± 0.29 79.66 ± 0.44 91.62 ± 0.22 73.57 ± 0.48 89.07 ± 0.29
Voronoi-LR 65.58 ± 0.44 84.51 ± 0.29 79.63 ± 0.44 91.61 ± 0.22 73.65 ± 0.48 89.15 ± 0.29
VD 65.37 ± 0.44 84.37 ± 0.29 78.57 ± 0.44 91.31 ± 0.23 72.83 ± 0.49 88.58 ± 0.29
CIVD-based DeepVoro--
VD + Power-LR 65.63 ± 0.44 84.25 ± 0.30 79.52 ± 0.43 91.52 ± 0.22 73.68 ± 0.48 88.71 ± 0.29
VD + Voronoi-LR 65.85 ± 0.43 84.66 ± 0.29 79.40 ± 0.44 91.57 ± 0.22 73.78 ± 0.48 89.02 ± 0.29
E.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we first establish three few-shot classification models with different underlying geometric structures, namely two logistic regression (LR) models and one nearest neighbor model: (1) Power Diagram-based LR (Power-LR), (2) Voronoi Diagram-based LR (Voronoi-LR), and (3) the Voronoi Diagram (VD). The main purposes of our analysis are then (1) to examine how the performance is affected by the proposed Voronoi reduction method of Sec. 3.2, and (2) to inspect whether VD can be integrated with the Power/Voronoi Diagram-based LRs.
The feature transformation used throughout this section is Pw,b,λ with w = 1.0, b = 0.0, λ = 0.5. For Power-LR, we train it directly on the transformed K-way N-shot support samples using the PyTorch library and an Adam optimizer with batch size 64 and learning rate 0.01. For Voronoi-LR, the vanilla LR is retrofitted as shown in Algorithm 1, in which the bias is given by Theorem 3.1 to make sure that the parameters induce a VD in each iteration.
In our CIVD model in Definition 3.2, we use a cluster instead of a single prototype to stand for a novel class. Here this cluster contains two points, i.e. Ck = {ck, c̃k}, in which ck is obtained from VD and c̃k is acquired from Power-LR or Voronoi-LR. The question we intend to answer here is whether Power-LR or Voronoi-LR is the more suitable model for the integration; a minimal sketch of the influence-based prediction is given below.
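To make the integration concrete, here is a minimal NumPy sketch of the influence-based prediction (illustrative code with our own naming; Euclidean distance and α = 1 are assumed, as elsewhere in the paper):

```python
import numpy as np

def civd_predict(z, clusters, alpha=1.0):
    """Assign query feature z to the class whose cluster exerts maximal influence.

    z:        (n,) query feature
    clusters: (K, L, n) array; K classes, L members per cluster (here L = 2:
              the VD prototype c_k and the Voronoi-LR center W_k / 2)
    Influence (Definition 3.3): F(C_k, z) = -sign(alpha) * sum_i d(c_k^(i), z)^alpha.
    """
    dists = np.linalg.norm(clusters - z, axis=-1)            # (K, L) distances
    influence = -np.sign(alpha) * (dists ** alpha).sum(-1)   # (K,) influences
    return int(np.argmax(influence))
```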
Algorithm 1: Voronoi Diagram-based Logistic Regression.
Data: Support Set S
Result: W
1: Initialize W ← W(0)
2: for epoch ← 1, ..., #epoch do
3:     bk ← −(1/4)||Wk||_2^2, ∀k = 1, ..., K    /* Apply Theorem 3.1 */
4:     Compute loss L(W, b)                      /* forward propagation */
5:     Update W                                  /* backward propagation */
6: end
7: return W
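For reference, a minimal PyTorch sketch of Algorithm 1 follows (our own illustrative code; the optimizer and learning rate match the setup above, and the bias is recomputed from W at every step rather than optimized):

```python
import torch
import torch.nn.functional as F

def train_voronoi_lr(feats, labels, num_classes, epochs=100, lr=0.01):
    """Logistic regression whose bias is tied to the weights by Theorem 3.1,
    so every iterate induces a Voronoi Diagram.

    feats: (N, n) transformed support features; labels: (N,) class indices (LongTensor).
    """
    W = torch.randn(num_classes, feats.shape[1], requires_grad=True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(epochs):
        b = -0.25 * (W ** 2).sum(dim=1)      # b_k = -1/4 ||W_k||_2^2 (Theorem 3.1)
        loss = F.cross_entropy(feats @ W.t() + b, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return 0.5 * W.detach()                  # centers {W_k / 2} for CIVD integration
```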
Figure F.1: The t-SNE visualizations of (A) the original features, (B) L2 normalization, (C) Tukey's Ladder of Powers Transformation with λ = 0.5, and (D) the compositional transformation with λ = 0, w = 1, b = 0.04, of 5 novel classes from the mini-ImageNet dataset.
E.2 RESULTS
The results are shown in Table E.3. Interestingly, when integrated with VD, Power-LR never reaches the best result, suggesting that VD and LR are intrinsically different geometric models and cannot simply be integrated without additional effort. On the mini-ImageNet and tiered-ImageNet datasets, the best results are achieved by either Voronoi-LR or VD+Voronoi-LR, showing that CIVD coupled with the proposed Voronoi reduction can ideally integrate parametric and nonparametric few-shot models. Notably, on these two datasets, when Power-LR is reduced to Voronoi-LR, the performance is always better even though the number of parameters is decreased (b is directly given by Theorem 3.1 and is not involved in the optimization), for example increasing from 65.45% to 65.58% on 5-way 1-shot mini-ImageNet data. On the CUB dataset, the results of the different models are similar, probably because CUB is a fine-grained dataset and all classes are similar to each other (all birds).
F DEEPVORO: IMPROVING FSL VIA HIERARCHICAL HETEROGENEITIES
F.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section we describe feature-level and transformation-level heterogeneities that are used for ensemble in order to improve FSL. See the next section for geometry-level heterogeneity.
Feature-level heterogeneity. Considering the reproducibility of the methodology, we only employ deterministic data augmentation upon the images, without randomness involved. Specifically, three kinds of data augmentation techniques are used. (1) Rotation is an important augmentation method widely used in self-supervised learning (Mangla et al., 2020). Rotating the original images by 0°, 90°, 180°, and 270° gives us four ways of augmentation. (2) After rotation, we can flip the images horizontally, giving rise to two additional choices after each rotation degree. (3) Central cropping after scaling can alter the resolution and focus area of the image. Scaling the original images to (84+B)×(84+B), with B increasing from 0 to 70 in steps of 10, brings us eight ways of augmentation.
Finally, different combinations of the three types result in 64 kinds of augmentation (i.e. |{T}| = 64). Transformation-level heterogeneity. In our compositional transformation, the function (hλ ◦ gw,b ◦ f)(z) is parameterized by w, b, λ. Since g is appended after the L2 normalization f, the vector that enters g is always a unit vector, so we simply set w = 1. For the different combinations of λ and b, we test different values with either λ = 0 or λ ≠ 0 on the hold-out validation set (as shown in Figures 2 and K.12), and pick the top-8 combinations with the best performance on the validation set. A minimal sketch of this compositional transformation is given below.
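A minimal sketch of the compositional transformation, assuming nonnegative raw features (e.g. post-ReLU activations) so that the power/log step is well defined:

```python
import numpy as np

def compositional_transform(z, w=1.0, b=0.0, lam=0.5, eps=1e-12):
    """Compute (h_lam ∘ g_{w,b} ∘ f)(z) on a single feature vector z.

    Assumes z is nonnegative; with b > 0 all entries stay strictly positive,
    which keeps the log branch (lam = 0) finite.
    """
    z = z / (np.linalg.norm(z) + eps)            # f: L2 normalization
    z = w * z + b                                # g: linear transformation
    return np.log(z) if lam == 0 else z ** lam   # h: Tukey's Ladder of Powers
```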
Ensemble Schemes. Now, in our configuration pool {T} × {Pw,b,λ}, there are 512 possible configurations {ρ(i)}_{i=1}^{512}. For each ρ, we apply it on both the testing and the validation sets. With this large pool of ensemble candidates, how and whether to select a subset {ρ(i)}_{i=1}^{L′} ⊆ {ρ(i)}_{i=1}^{512} is still a nontrivial problem. Here we explore three different schemes. (1) Full (vanilla) ensemble. All candidates in {ρ(i)}_{i=1}^{512} are taken into consideration and plugged into Definition 3.5 to build the CIVD for space partition. (2) Random ensemble. A randomly selected subset of size L′ < L is used for the ensemble. (3) Guided ensemble. We expect that the performance of {ρ(i)}_{i=1}^{512} on the validation set can be used to guide the selection of {ρ(i)}_{i=1}^{L′} on the testing set, provided that there is good correlation between the testing set and the validation set. Specifically, we rank the configurations on the validation set with regard to their performance, and add them sequentially into {ρ(i)}_{i=1}^{L′} until a maximum ensemble performance is reached on the validation set; we then use this configuration set for the final ensemble (a sketch of the guided scheme follows). Since VD is nonparametric and fast, we adopt VD as the building block and use only VD for each ρ in the remainder of the paper. The α value in the influence function (Definition 3.3) is set to 1 throughout the paper for simplicity of computation.
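A simplified sketch of the guided scheme is shown below (our own illustrative code; for readability it averages per-configuration class scores instead of performing the full CCVD aggregation):

```python
import numpy as np

def guided_ensemble(val_acc, scores_val, labels_val, scores_test):
    """Rank configurations by validation accuracy and keep the best prefix.

    val_acc:    (L,) per-configuration validation accuracy
    scores_*:   (L, Q, K) per-configuration class scores (higher = better)
    Returns test-set predictions of the selected ensemble.
    """
    order = np.argsort(val_acc)[::-1]                 # best configurations first
    best_acc, best_m = -1.0, 1
    for m in range(1, len(order) + 1):                # grow the ensemble greedily
        pred = scores_val[order[:m]].mean(0).argmax(-1)
        acc = (pred == labels_val).mean()
        if acc > best_acc:
            best_acc, best_m = acc, m
    return scores_test[order[:best_m]].mean(0).argmax(-1)
```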
For a fair comparison, we downloaded the trained models1 used by Mangla et al. (2020) and Yang et al. (2021). The performance of FSL algorithms is typically evaluated over a sequence of independent episodes, so the data split and the random seed for the selection of novel classes, as well as the support/query sets in each episode, all lead to different results. To ensure the fairness of our evaluation, DC (Yang et al., 2021) and S2M2 R (Mangla et al., 2020) are reevaluated with the same data split and random seed as DeepVoro. The results are obtained by running 2000 episodes, and the average accuracy as well as 95% confidence intervals are reported.
F.2 RESULTS
Table F.4: Ablation study of DeepVoro's performance with different levels of ensemble. The number of ensemble members is given in parentheses.
Methods Feature-level Transformation-level mini-ImageNet CUB tiered-ImageNet
1-shot 5-shot 1-shot 5-shot 1-shot 5-shot
No Ensemble ✗ ✗ 65.37 ± 0.44 84.37 ± 0.29 78.57 ± 0.44 91.31 ± 0.23 72.83 ± 0.49 88.58 ± 0.29
Vanilla Ensemble (8) ✗ ✓ 66.45 ± 0.44 84.55 ± 0.29 80.98 ± 0.44 91.47 ± 0.22 74.02 ± 0.49 88.90 ± 0.29
Vanilla Ensemble (64) ✓ ✗ 67.88 ± 0.45 86.39 ± 0.29 77.30 ± 0.43 91.26 ± 0.23 73.74 ± 0.49 88.67 ± 0.29
Vanilla Ensemble (512) ✓ ✓ 69.23 ± 0.45 86.70 ± 0.28 79.90 ± 0.43 91.70 ± 0.22 74.51 ± 0.48 89.11 ± 0.29
Random Ensemble (512) ✓ ✓ 69.30 ± 0.45 86.74 ± 0.28 80.40 ± 0.43 91.94 ± 0.22 74.64 ± 0.48 89.15 ± 0.29
Guided Ensemble (512) ✓ ✓ 69.48 ± 0.45 86.75 ± 0.28 82.99 ± 0.43 92.62 ± 0.22 74.98 ± 0.48 89.40 ± 0.29
Our proposed compositional transformation enlarges the expressivity of the transformation function. When the Tukey's Ladder of Powers Transformation is used individually, as reported in Yang et al. (2021), the optimal λ is not 0; but if an additional linear transformation g is inserted between f and h, λ = 0 coupled with a proper b can give an even better result, as shown in Figures 2 and K.12. Importantly, as Figure 2 shows, a combination of λ and b with good performance on the validation set can also produce a satisfactory result on the testing set, suggesting that it is possible to optimize the hyperparameters on the validation set and generalize well to the testing set. In terms of the polymorphism induced by various transformations in the feature space, Figure F.1 exhibits the t-SNE visualizations of the original features and the features after three different kinds of transformation, showing that the relative positions of different novel classes are largely changed, especially after the compositional transformation (panel D). Besides commonly used data augmentation, this transformation provides another level of diversity that may be beneficial to the subsequent ensemble.
The results for the different levels of ensemble are shown in Table F.4, in which the number of ensemble members is also indicated. Although the transformation ensemble does not involve any change to the feature, it can largely improve the results for 1-shot FSL: from 65.37% to 66.45% on mini-ImageNet,
1downloaded from https://github.com/nupurkmr9/S2M2_fewshot
from 78.57% to 80.98% on CUB, and from 72.83% to 74.02% on tiered-ImageNet, respectively, probably because 1-shot FSL is more prone to overfitting due to its severe data deficiency. Feature-level ensemble, on the other hand, is more important for 5-shot FSL, especially for mini-ImageNet. When the two levels are combined, the number of ensemble members increases to 512 and the performance significantly surpasses each individual level. On all three datasets, the guided ensemble scheme always achieves the best result for both single-shot and multi-shot cases, showing that the validation set can indeed be used to guide the subset selection and that our method is robust across classes in the same domain. When no such validation set is available, the full ensemble and random ensemble schemes can also give comparable results.
To inspect how performance changes with the number of ensemble members, we exhibit the distribution of accuracy at the three ensemble levels for mini-ImageNet in Figures F.2 and F.3, for CUB in Figures F.4 and F.5, and for tiered-ImageNet in Figures F.6 and F.7. Panel (b) in each of them also exhibits the correlation between the testing and validation sets for all 512 configurations. Clearly, a better result is often reached when there are more configurations in the ensemble, validating the efficacy of our method for improving the performance and robustness of FSL. Algorithm 2 below summarizes the VD with surrogate representation; a NumPy sketch follows it.

Algorithm 2: VD with Surrogate Representation for Episode T.
Data: Base classes D, Support Set S = {(xi, yi)}_{i=1}^{K×N}, yi ∈ CT, query sample x
Result: d̃
1: D′ ← (Pw,b,λ ◦ φ ◦ T)(D)    /* Extract and transform features */
2: S′ ← (Pw,b,λ ◦ φ ◦ T)(S)
3: z ← (Pw,b,λ ◦ φ ◦ T)(x)
4: for t ← 1, ..., |Cbase| do    /* Compute prototypes of base classes */
5:     c′t ← (1/|{(z′, y) | z′ ∈ D′, y = t}|) Σ_{z′∈D′, y=t} z′
6: end
7: for k ← 1, ..., K do    /* Compute prototypes from support samples */
8:     ck ← (1/N) Σ_{z′∈S′, y=k} z′
9:     dk ← d(z, ck)
10: end
11: Csurrogate ← ∅
12: for k ← 1, ..., K do    /* Find surrogate classes */
13:     Csurrogate ← Csurrogate ∪ Top-R_{t∈{1,...,|Cbase|}} d(ck, c′t)
14: end
15: R̃ ← |Csurrogate|
16: d′ ← (d(z, c′1), ..., d(z, c′_{R̃}))    /* Surrogate representation of the query sample */
17: for k ← 1, ..., K do    /* Surrogate representations of the support samples */
18:     d′k ← (d(ck, c′1), ..., d(ck, c′_{R̃}))
19:     d′′k ← d(d′, d′k)
20: end
21: d̃ ← β d/||d||_1 + γ d′′/||d′′||_1    /* Compute final criterion */
22: return d̃
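Below is a minimal NumPy sketch of Algorithm 2; the code and names are ours, Euclidean distance is used throughout, and features are assumed to be already extracted and transformed:

```python
import numpy as np

def surrogate_score(z, support_protos, base_protos, R=10, beta=1.0, gamma=1.0):
    """Combine feature and surrogate representations (Algorithm 2 sketch).

    z: (n,) query feature; support_protos: (K, n); base_protos: (B, n).
    Returns d_tilde (K,); the predicted class is its argmin.
    """
    d = np.linalg.norm(support_protos - z, axis=1)               # feature distances (K,)
    # Top-R nearest base prototypes per novel prototype -> surrogate classes
    dist_kb = np.linalg.norm(support_protos[:, None] - base_protos[None], axis=2)
    surrogate = np.unique(np.argsort(dist_kb, axis=1)[:, :R])    # union of top-R indices
    d_q = np.linalg.norm(base_protos[surrogate] - z, axis=1)     # query surrogate rep.
    d_k = dist_kb[:, surrogate]                                  # support surrogate reps
    dd = np.linalg.norm(d_k - d_q, axis=1)                       # surrogate distances (K,)
    return beta * d / d.sum() + gamma * dd / dd.sum()            # Eq. (8)
```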
Figure F.2: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool (5-way 5-shot mini-ImageNet): (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro (Dirichlet tessellation ensemble).
Figure F.3: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool (5-way 1-shot mini-ImageNet): (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro (Dirichlet tessellation ensemble).
Figure F.4: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool (5-way 5-shot CUB): (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro (Dirichlet tessellation ensemble).
Figure F.5: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool (5-way 1-shot CUB): (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro (Dirichlet tessellation ensemble).
Figure F.6: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool (5-way 5-shot tiered-ImageNet): (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro (Dirichlet tessellation ensemble).
Figure F.7: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool (5-way 1-shot tiered-ImageNet): (a) transformation-level ensemble; (b) testing/validation set correlation; (c) feature-level ensemble; (d) DeepVoro (Dirichlet tessellation ensemble).
Figure G.8: The accuracy of VD with an increasing number of shots on the mini-ImageNet dataset. The annotated accuracies are: 1 shot: 65.31%; 3: 80.12%; 5: 84.05%; 7: 85.95%; 10: 87.60%; 15: 88.91%; 20: 89.75%; 40: 90.63%; 100: 91.18%; 200: 91.22%; 400: 91.55%.
G DEEPVORO++: FURTHER IMPROVEMENT OF FSL VIA SURROGATE REPRESENTATION
G.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we introduce another layer of heterogeneity, namely the geometry-level heterogeneity that exists in our surrogate representation. In Definition 3.4, increasing R enlarges the degree of locality when searching for the top-R surrogate classes. In equation (8), if we set γ = 1, then increasing β makes the model rely more on the feature representation and less on the surrogate representation. In order to weigh up R and β, we perform a grid search over different combinations of R and β on the validation set, as shown in Figures K.13, K.14, and K.15. For each R, we select the β that gives rise to the best result on the validation set and use this (R, β) on the testing set, resulting in 10 such pairs in total. So there are 10 models in the geometry-level heterogeneity, standing for different degrees of locality. In conjunction with the feature-level (64 kinds of augmentation) and transformation-level (here only the top-2 best transformations are used) heterogeneities, there are now 1280 different configurations in our configuration pool that will be used by the CCVD model. In conclusion, there are overall 512 + 1280 = 1792 configurations for a few-shot episode. Generating ~1800 ensemble candidates is nearly intractable for parametric methods like logistic regression or a cosine classifier, which may consume, e.g., months for thousands of episodes. However, the VD model is nonparametric and highly efficient, making it empirically possible to collect all the combinations and integrate them all together via CCVD; a sketch of assembling this configuration pool is given below. The complete algorithm for the computation of the surrogate representation is shown in Algorithm 2.
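The following snippet illustrates, under our own naming and with placeholder hyperparameter values (the actual (λ, b) and (R, β) pairs are selected on the validation set), how the DeepVoro++ configuration pool is enumerated:

```python
from itertools import product

# Illustrative enumeration of the DeepVoro++ configuration pool.
rotations = [0, 90, 180, 270]
flips = [False, True]
scales = [84 + 10 * i for i in range(8)]                 # scale to (84+B) x (84+B)
augmentations = list(product(rotations, flips, scales))  # 4 * 2 * 8 = 64

transformations = [(0.5, 0.0), (0.0, 0.04)]  # top-2 (lambda, b) pairs, w = 1 (placeholder)
geometry = [(R, 1.0) for R in range(1, 11)]  # 10 (R, beta) pairs; beta is tuned per R

pool = list(product(augmentations, transformations, geometry))
assert len(pool) == 64 * 2 * 10 == 1280      # plus 512 from DeepVoro -> 1792 in total
```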
G.2 RESULTS
The heatmaps for different (R, β) pairs on the testing/validation sets are shown in Figure K.13 for mini-ImageNet, in Figure K.14 for CUB, and in Figure K.15 for tiered-ImageNet, respectively. Basically, the testing and validation sets follow the same pattern. When R is small, i.e. only a small number of base classes are used as surrogates, a higher weight should be placed on the feature representation. With a fixed β, increasing R beyond a certain threshold can cause a drop in accuracy, probably because the meaningful similarities are likely to be overwhelmed by the signals from the large volume of irrelevant base classes.
Table G.5: Ablation study of DeepVoro++'s performance with different levels of ensemble. The number of ensemble members is given in parentheses.
Methods Feature-level Transformation-level Geometry-level mini-ImageNet CUB tiered-ImageNet
No Ensemble ✗ ✗ ✗ 65.37 ± 0.44 78.57 ± 0.44 72.83 ± 0.49
Vanilla Ensemble (20) ✗ ✓ ✓ 68.38 ± 0.46 80 |
1. What is the focus of the paper regarding few-shot learning and Voronoi Diagrams?
2. What are the strengths of the proposed approach, particularly in its mathematical formulation and linking to Cluster-induced Voronoi Diagrams?
3. What are the weaknesses of the paper, such as lacking explanations or implementation details?
4. How does the reviewer assess the clarity and ease of understanding of the paper's content?
5. Are there any suggestions for future research or potential applications of the proposed technique? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents an approach for few-shot (FS) learning using Voronoi Diagrams (VD). In particular, it relates the objectives of existing FS approaches to VD, and shows how Cluster-induced Voronoi Diagrams (CIVD), a variant of VD that allows multiple centers in a cell, can be used in an FS learning ensemble method (DeepVoro). Extensive quantitative evaluations show improvements over prior work on three public datasets.
Review
Strengths
Mathematical formulation of FS approaches in terms of VD terminology
Link to CIVD, which allows ensembling different FS classifiers by means of an influence function that allows multiple centers per cell
Extensive quantitative results on mini-ImageNet, CUB, tiered-ImageNet
Paper is well-written
Weaknesses
Mathematical derivation of FS as a tessellation is provided for nearest neighbor and linear classifiers; it would be helpful to understand whether all FS classifiers have a VD explanation, or whether there are some classes of FS approaches that are not VD
The methodology section could be clearer if it had sections named after the different contributions (for easy access)
Lack of explanation of DeepVoro and DeepVoro++
Lack of implementation details section
Lack of discussion how the ensemble approach using CIVD is related to meta-learning techniques
Small correction:
Typo: “To resolve this issie” -> To resolve this issue
Overall, the paper presents a mathematical link between VD and FS approaches, and shows how to use a multi-set VD diagram (CIVD) to unify multiple FS classifiers. It is well written, and I found it easy to understand the main points of the paper. My main concern is the lack of details provided about the implementation, as I can see this approach being useful in many future FS research techniques. It would also be valuable to add a discussion of how outliers would affect the method and of the computational complexity of the proposed technique. Can it be used for other tasks, such as segmentation?
ICLR | Title
Few-shot Learning via Dirichlet Tessellation Ensemble
Abstract
Few-shot learning (FSL) is the process of rapid generalization from abundant base samples to inadequate novel samples. Despite extensive research in recent years, FSL is still not yet able to generate satisfactory solutions for a wide range of real-world applications. To confront this challenge, we study the FSL problem from a geometric point of view in this paper. One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space. We retrofit it by making use of a recent advance in computational geometry called Cluster-induced Voronoi Diagram (CIVD). Starting from the simplest nearest neighbor model, CIVD gradually incorporates cluster-to-point and then cluster-to-cluster relationships for space subdivision, which is used to improve the accuracy and robustness at multiple stages of FSL. Specifically, we use CIVD (1) to integrate parametric and nonparametric few-shot classifiers; (2) to combine feature representation and surrogate representation; and (3) to leverage feature-level, transformation-level, and geometry-level heterogeneities for a better ensemble. Our CIVD-based workflow enables us to achieve new state-of-the-art results on the mini-ImageNet, CUB, and tiered-ImageNet datasets, with ∼2%−5% improvements upon the next best. To summarize, CIVD provides a mathematically elegant and geometrically interpretable framework that compensates for extreme data insufficiency, prevents overfitting, and allows for fast geometric ensemble of thousands of individual VDs. These together make FSL stronger.
1 INTRODUCTION
Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason for this is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interest (Wang et al., 2020).
Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real-world scenarios a query sample (e.g. a patient) comes individually and is unique, for instance in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, in this paper we adhere to the inductive setting and make on-the-fly predictions for each newly seen sample.
Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. Despite the extensive research
All four authors are corresponding authors.
on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?
For the first question, data augmentation has been a successful approach to expand the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) (Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014) (Zhang et al., 2019; Chen et al., 2019b). However, in each case, the authenticity of either the augmented data or the feature is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigate support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for a better discrimination on novel classes, which all introduce extra training burden. As a matter of fact, we still lack a method that makes full use of the base classes and the pre-trained model effectively.
In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, the nearest neighbor-like approaches, e.g. ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. Geometrically, what a nearest neighbor-based method does, under the hood, is to partition the feature space into a Voronoi Diagram (VD) induced by the feature centroids of the novel classes. Although highly efficient and simple, a Voronoi Diagram coarsely draws the decision boundary via linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure that arises in FSL.
To resolve this issue, we adopt a novel technique called Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), which is a recent breakthrough in computational geometry. CIVD generalizes VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating region (or Voronoi cell) not only for a point (e.g. a class prototype) but also for a cluster of points, guaranteed to have a (1 + ε)-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. CIVD provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than VD, without losing the resistance to overfitting.
Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows.
1. We first categorize different types of few-shot classifiers as different variants of Voronoi Diagram: nearest neighbor model as Voronoi Diagram, linear classifier as Power Diagram, and cosine classifier as spherical Voronoi Diagram (Table 1). We then unify them via CIVD that enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--).
2. Going from cluster-to-point to cluster-to-cluster influence, we further propose Cluster-to-cluster Voronoi Diagram (CCVD), as a natural extension of CIVD. Based on CCVD, we present DeepVoro which enables fast geometric ensemble of a large pool of thousands of configurations for FSL.
3. Instead of using base classes for distribution calibration and data augmentation (Yang et al., 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++ that integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL.
Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all
three benchmark datasets including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures.
2 RELATED WORK
Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metric-based methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover's Distance (Zhang et al., 2020a;b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based methods (Zhang et al., 2021b; Mangla et al., 2020) incorporate supervision from the data itself to learn a more robust feature extractor. (4) Ensembling is another powerful technique that boosts performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) trains several networks simultaneously and encourages robustness and cooperation among them. However, due to the high computational load of training deep models, this ensemble is restricted by the number of networks, which is typically <20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks is first hinted at by Raghu et al. (2017) who reveals that piecewise linear activations subdivide input space into convex polytopes. Then, Balestriero et al. (2019) points out that the exact structure is a Power Diagram (Aurenhammer, 1987) which is subsequently applied upon recurrent neural network (Wang et al., 2018) and generative model (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing feature space. Recently, Chen et al. (2013; 2017); Huang et al. (2021) uses an influence function F (C, z) to measure the joint influence of all objects in C on a query z to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL.
3 METHODOLOGY
3.1 PRELIMINARIES
Few-shot learning aims at discriminating between novel classes Cnovel with the aid of a larger number of samples from base classes Cbase, Cnovel ∩ Cbase = ∅. The whole learning process usually follows the meta-learning scheme. Formally, given a dataset of base classes D = {(xi, yi)}, xi ∈ D, yi ∈ Cbase, with D being an arbitrary domain, e.g. natural images, a deep neural network z = φ(x), z ∈ Rn, which maps from the image domain D to the feature domain Rn, is trained using a standard gradient descent algorithm, after which φ is fixed as a feature extractor. This process is referred to as the meta-training stage, which squeezes the commonsense knowledge out of D. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of K-way N-shot tasks (episodes) {T}. Each such episode is further decomposed into a support set S = {(xi, yi)}_{i=1}^{K×N}, yi ∈ CT, and a query set Q = {(xi, yi)}_{i=1}^{K×Q}, yi ∈ CT, in which the episode classes CT form a randomly sampled subset of Cnovel with cardinality K, and each class contains only N and Q random samples in the support set and query set, respectively. For few-shot classification, we introduce two widely used schemes as follows. For simplicity, all samples here are from S and Q, without data augmentation applied. Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019), etc., a prototype ck is acquired by averaging over all support features for a class k ∈ CT:
ck = (1/N) Σ_{x∈S, y=k} φ(x)    (1)
Then each query sample x ∈ Q is classified by finding the nearest prototype: ŷ = arg min_k d(z, ck), where d(z, ck) = ||z − ck||_2^2 is the squared Euclidean distance we use as the metric. Linear Classifier (Parametric). Another scheme uses a linear classifier with a cross-entropy loss optimized on the support samples:
L(W, b) = Σ_{(x,y)∈S} −log p(y | φ(x); W, b) = Σ_{(x,y)∈S} −log [ exp(W_y^T φ(x) + b_y) / Σ_k exp(W_k^T φ(x) + b_k) ]    (2)
in which Wk and bk are the linear weight and bias for class k, and the predicted class for a query x ∈ Q is ŷ = arg max_k p(y | z; Wk, bk). A minimal sketch of the nonparametric head is given below.
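For concreteness, here is a minimal NumPy sketch of the nonparametric head (illustrative code with our own naming; the parametric head is ordinary logistic regression, sketched in Appendix E):

```python
import numpy as np

def prototypes(feats, labels, num_classes):
    """Class prototypes c_k: the mean support feature per class (Eq. 1)."""
    return np.stack([feats[labels == k].mean(axis=0) for k in range(num_classes)])

def nearest_prototype(z, protos):
    """Nonparametric head: assign z to the nearest prototype, i.e. the
    Voronoi cell of {c_1, ..., c_K} that z falls into."""
    return int(np.argmin(np.linalg.norm(protos - z, axis=1)))
```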
3.2 FEW-SHOT LEARNING AS CLUSTER-INDUCED VORONOI DIAGRAMS
In this section, we first introduce the basic concepts of Voronoi Tessellations, and then show how parametric/nonparametric classifier heads can be unified by VD.
Definition 3.1 (Power Diagram and Voronoi Diagram). Let Ω = {ω1, ..., ωK} be a partition of the space Rn, and C = {c1, ..., cK} be a set of centers such that ∪_{r=1}^K ωr = Rn, ∩_{r=1}^K ωr = ∅. Additionally, each center is associated with a weight νr ∈ {ν1, ..., νK} ⊆ R+. Then the set of pairs {(ω1, c1, ν1), ..., (ωK, cK, νK)} is a Power Diagram (PD), where each cell is obtained via ωr = {z ∈ Rn : r(z) = r}, r ∈ {1, ..., K}, with

r(z) = arg min_{k∈{1,...,K}} d(z, ck)^2 − νk.    (3)

If the weights are equal for all k, i.e. νk = νk′ for all k, k′ ∈ {1, ..., K}, then a PD collapses to a Voronoi Diagram (VD).
By definition, it is easy to see that the nearest neighbor classifier naturally partitions the space into K cells with centers {c1, ..., cK}. Here we show that the linear classifier is also a VD under a mild condition.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W, b partitions the input space Rn into a Voronoi Diagram with centers {c̃1, ..., c̃K} given by c̃k = (1/2)Wk if bk = −(1/4)||Wk||_2^2, k = 1, ..., K.
Proof. See Appendix B for details.
3.2.1 FROM VORONOI DIAGRAM TO CLUSTER-INDUCED VORONOI DIAGRAM
Now that both nearest neighbor and linear classifier have been unified by VD, a natural idea is to integrate them together. Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al.,
2021) is a generalization of VD which allows multiple centers in a cell, and is successfully used for clinical diagnosis from biomedical images (Wang et al., 2015), providing an ideal tool for the integration of parametric/nonparametric classifier for FSL. Formally: Definition 3.2 (Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al., 2021)). Let Ω = {ω1, ..., ωK} be a partition of the space Rn, and C = {C1, ..., CK} be a set (possibly a multiset) of clusters. The set of pairs {(ω1, C1), ..., (ωK , CK)} is a Cluster-induced Voronoi Diagram (CIVD) with respect to the influence function F (Ck, z), where each cell is obtained via ωr = {z ∈ Rn : r(z) = r}, r ∈ {1, ..,K}, with
r(z) = arg max_{k∈{1,...,K}} F(Ck, z).    (4)
Here C can be either a given set of clusters or even the whole power set of a given point set, and the influence function is defined as a function over the collection of distances from each member in a cluster Ck to a query point z: Definition 3.3 (Influence Function). The influence from Ck, k ∈ {1, ..., K}, on z ∉ Ck is F(Ck, z) = F({d(c_k^{(i)}, z) | c_k^{(i)} ∈ Ck}_{i=1}^{|Ck|}). In this paper F is assumed to have the following form:

F(Ck, z) = −sign(α) Σ_{i=1}^{|Ck|} d(c_k^{(i)}, z)^α.    (5)
The sign function here makes sure that F is a monotonically decreasing function with respect to the distance d. The hyperparameter α controls the magnitude of the influence; for example, α = −(n−1) for gravitational force in n-dimensional space and α = −2 for electric force. Since the nearest neighbor centers {ck}_{k=1}^K and the centers introduced by the linear classifier {c̃k}_{k=1}^K are obtained from different schemes and could both be informative, we merge the corresponding centers for a novel class k into a new cluster Ck = {ck, c̃k}, and use the resulting C = {C1, ..., CK} to establish a CIVD. In such a way, the final partition may enjoy the advantages of both the parametric and the nonparametric classifier heads. We name this approach DeepVoro--.
3.3 FEW-SHOT CLASSIFICATION VIA SURROGATE REPRESENTATION
In the nearest neighbor classifier head, the distance from a query feature z to each of the prototypes {ck}_{k=1}^K is the key discrimination criterion for classification. We rewrite {d(z, ck)}_{k=1}^K as a vector d ∈ RK such that dk = d(z, ck). These distances are acquired by measuring the distance between two points in high dimension: z, ck ∈ Rn. However, a notorious behavior of high dimensions is that the ratio between the nearest and farthest points in a point set P approaches 1 (Aggarwal et al., 2001), making {d(z, ck)}_{k=1}^K less discriminative for classification, especially for the FSL problem with sample size N · K ≪ n. Hence, in this paper, we seek a surrogate representation. In the human perception and learning system, similarity among familiar and unfamiliar objects plays a key role for object categorization and classification (Murray et al., 2002), and it has been experimentally verified by functional magnetic resonance imaging (fMRI) that a large region in the occipitotemporal cortex processes the shape of both meaningful and unfamiliar objects (Op de Beeck et al., 2008). In our method, a connection will be built between each unfamiliar novel class in Cnovel and each related well-perceived familiar class in Cbase. So the first step is to identify the most relevant base classes for a specific task T. Concretely: Definition 3.4 (Surrogate Classes). In episode T, given the set of prototypes {ck}_{k=1}^K for the support set S and the set of prototypes {c′t}_{t=1}^{|Cbase|} for the base set D, the surrogate classes for the episode classes CT are given as:

Csurrogate(T) = ∪_{k=1}^K Top-R_{t∈{1,...,|Cbase|}} d(ck, c′t)    (6)

in which the Top-R function returns the R base-class indices with the smallest distances to ck, and the center for a base class t is given as c′t = (1/|{(x, y) | x ∈ D, y = t}|) Σ_{x∈D, y=t} φ(x). Here R is a hyperparameter.
The rationale behind this selection, instead of simply using the whole set of base classes Cbase, is that the episode classes CT overlap with only a portion of the base classes (Zhang et al., 2021a), and discriminative similarities are likely to be overwhelmed by the background signal, especially when the number of base classes is large. After the surrogate classes are found, we re-index their feature centers as {c′j}_{j=1}^{R̃}, R̃ ≤ R · K. Then, both the support centers {ck}_{k=1}^K and the query feature z are represented by the collection of similarities to these surrogate centers:
d′k = (d(ck, c′1), ..., d(ck, c′_{R̃})), k = 1, ..., K;    d′ = (d(z, c′1), ..., d(z, c′_{R̃}))    (7)
where d′k, d′ ∈ R^{R̃} are the surrogate representations of novel class k and of the query feature z, respectively. With the surrogate representation, the prediction is found through ŷ = arg min_k d(d′, d′k) = arg min_k ||d′ − d′k||_2^2. This set of discriminative distances is rewritten as d′′ ∈ R^K such that d′′k = d(d′, d′k). An illustration of the surrogate representation is shown in Figure 1 on MultiDigitMNIST, a demonstrative dataset.
Integrating Feature Representation and Surrogate Representation. Until now, we have two discriminative systems, i.e., the feature-based d ∈ R^K and the surrogate-based d′′ ∈ R^K. A natural idea is to combine them to form the following final criterion:

d̃ = β d/||d||_1 + γ d′′/||d′′||_1,    (8)

where d and d′′ are normalized by their Manhattan norms, ||d||_1 = Σ_{k=1}^K dk and ||d′′||_1 = Σ_{k=1}^K d′′k, respectively, and β and γ are two hyperparameters adjusting the weights of the feature representation and the surrogate representation.
3.4 DEEPVORO: INTEGRATING MULTI-LEVEL HETEROGENEITY OF FSL
In this section we present DeepVoro, a fast geometric ensemble framework that unites our contributions to multiple stages of FSL, and show how it can be promoted to DeepVoro++ by incorporating surrogate representation.
Compositional Feature Transformation. It is believed that FSL algorithms favor features with more Gaussian-like distributions, and thus various kinds of transformations are used to improve the normality of feature distribution, including power transformation (Hu et al., 2021), Tukey’s Ladder of Powers Transformation (Yang et al., 2021), and L2 normalization (Wang et al., 2019). While these transformations are normally used independently, here we propose to combine several transformations sequentially in order to enlarge the expressivity of transformation function and to increase the polymorphism of the FSL process. Specifically, for a feature vector z, three kinds of transformations are considered: (I) L2 Normalization. By projection onto the unit sphere in Rn, the feature is normalized as: f(z) = z||z||2 . (II) Linear Transformation. Now since all the features are located on the unit sphere, we then can do scaling and shifting via a linear transformation: gw,b(z) = wz + b. (III) Tukey’s Ladder of Powers Transformation. Finally, Tukey’s Ladder of Powers Transformation
is applied on the feature: hλ(z) = { zλ if λ 6= 0 log(z) if λ = 0 . By the composition of L2 normalization, linear transformation, and Tukey’s Ladder of Powers Transformation, now the transformation function becomes (hλ ◦ gw,b ◦ f)(z) parameterized by w, b, λ. Multi-level Heterogeneities in FSL. Now we are ready to articulate the hierarchical heterogeneity existing in different stages of FSL. (I) Feature-level Heterogeneity: Data augmentation has been exhaustively explored for expanding the data size of FSL (Ni et al., 2021), including but not limited to rotation, flipping, cropping, erasing, solarization, color jitter, MixUp (Zhang et al., 2017), etc. The modification of image x will change the position of feature z in the feature space. We denote all possible translations of image as a set of functions {T}. (II) Transformation-level Heterogeneity: After obtaining the feature z, a parameterized transformation is applied to it, and the resulting features can be quite different for these parameters (see Figure F.1). We denote the set of all possible transformations to be {Pw,b,λ}. (III) Geometry-level Heterogeneity: Even with the provided feature, the few-shot classification model can still be diverse: whether a VD or PD-based model is used, whether the feature or the surrogate representation is adopted, and the setting of R will also change the degree of locality. We denote all possible models as {M}.
DeepVoro for Fast Geometric Ensemble of VDs. With the above three-layer heterogeneity, the FSL process can be encapsulated as (M◦Pw,b,λ◦φ◦T )(x), and all possible configurations of a given episode T with a fixed φ is the Cartesian product of these three sets: {T}×{Pw,b,λ}×{M}. Indeed, when a hold-out validation dataset is available, it can be used to find the optimal combination, but by virtue of ensemble learning, multiple models can still contribute positively to FSL (Dvornik et al., 2019). Since the cardinality of the resulting configuration set could be very large, the FSL model M as well as the ensemble algorithm is required to be highly efficient. The VD is a nonparametric model and no training is needed during the meta-testing stage, making it suitable for fast geometric ensemble. While CIVD models the cluster-to-point relationship via an influence function, here we further extend it so that cluster-to-cluster relationship can be considered. This motivates us to define Cluster-to-cluster Voronoi Diagram (CCVD): Definition 3.5 (Cluster-to-cluster Voronoi Diagram). Let Ω = {ω1, ..., ωK} be a partition of the space Rn, and C = {C1, ..., CK} be a set of totally ordered sets with the same cardinality L (i.e. |C1| = |C2| = ... = |CK | = L). The set of pairs {(ω1, C1), ..., (ωK , CK)} is a Cluster-to-cluster Voronoi Diagram (CCVD) with respect to the influence function F (Ck, C(z)), and each cell is obtained via ωr = {z ∈ Rn : r(z) = r}, r ∈ {1, ..,K}, with
r(z) = arg max_{k∈{1,...,K}} F(Ck, C(z))    (9)
where C(z) is the cluster (also a totally ordered set with cardinality L) to which the query point z belongs; that is, all points in this cluster (the query cluster) are assigned to the same cell. Similarly, the influence function is defined over two totally ordered sets Ck = {c_k^{(i)}}_{i=1}^L and C(z) = {z^{(i)}}_{i=1}^L:

F(Ck, C(z)) = −sign(α) Σ_{i=1}^{L} d(c_k^{(i)}, z^{(i)})^α.    (10)
With this definition, we are now able to streamline our aforementioned novel approaches into a single ensemble model. Suppose there are in total L possible settings in our configuration pool {T} × {Pw,b,λ} × {M}. For all configurations {ρi}_{i=1}^L, we apply them to the support set S to generate the K totally ordered clusters {{c_k^{(ρi)}}_{i=1}^L}_{k=1}^K, including each center c_k^{(ρi)} derived through configuration ρi, and to a query sample x to generate the query cluster C(z) = {z^{(ρ1)}, ..., z^{(ρL)}}; we then plug these two into Definition 3.5 to construct the final Voronoi Diagram (a minimal sketch of the decision rule is given below).
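The following minimal NumPy sketch (our own illustrative code) shows the CCVD decision rule, matching configuration i of the query with configuration i of each class cluster:

```python
import numpy as np

def ccvd_predict(query_cluster, class_clusters, alpha=1.0):
    """Cluster-to-cluster Voronoi Diagram scoring (Definition 3.5 sketch).

    query_cluster:  (L, n) -- query feature under each of the L configurations
    class_clusters: (K, L, n) -- per-class centers, ordered by configuration
    Configuration i of the query is compared only with configuration i of each
    class, and the resulting influences are aggregated over configurations.
    """
    dists = np.linalg.norm(class_clusters - query_cluster[None], axis=-1)  # (K, L)
    influence = -np.sign(alpha) * (dists ** alpha).sum(-1)                 # (K,)
    return int(np.argmax(influence))
```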
When only the feature representation is considered in the configuration pool, i.e. ρi ∈ {T} × {Pw,b,λ}, our FSL process is named DeepVoro; if the surrogate representation is also incorporated, i.e. ρi ∈ {T} × {Pw,b,λ} × {M}, DeepVoro is promoted to DeepVoro++, which allows for higher geometric diversity. See Appendix A for a summary of the notations and acronyms.
4 EXPERIMENTS
The main goals of our experiments are to: (1) validate the strength of CIVD in integrating parametric and nonparametric classifiers and confirm the necessity of the Voronoi reduction; (2) investigate how different levels of heterogeneity individually or collaboratively contribute to the overall result, and compare them with the state-of-the-art methods; and (3) reanalyze this ensemble when the surrogate representation comes into play, and see how it could ameliorate the extreme shortage of support samples. See Table 2 for a summary and Appendix D for the detailed descriptions of mini-ImageNet (Vinyals et al., 2016), CUB (Welinder et al., 2010), and tiered-ImageNet (Ren et al., 2018), which are used in this paper.
DeepVoro--: Integrating Parametric and Nonparametric Methods via CIVD. To verify our proposed CIVD model for the integration of parametric/nonparametric FSL classifiers, we first run three standalone models: Logistic Regressions with Power/Voronoi Diagrams as the underlying geometric structure (Power-LR/Voronoi-LR), and the vanilla Voronoi Diagram (VD, i.e. the nearest neighbor model), and then integrate VD with either Power- or Voronoi-LR (see Appendix E for details). Interestingly, VD with Power-LR never reaches the best result, suggesting that ordinary LR cannot be integrated with VD due to their intrinsically distinct geometric structures. After the proposed Voronoi reduction (Theorem 3.1), however, VD+Voronoi-LR is able to improve upon both models in most cases, suggesting that CIVD can ideally integrate parametric and nonparametric models for better FSL.
DeepVoro: Improving FSL by Hierarchical Heterogeneities. In this section, we only consider two levels of heterogeneity for the ensemble: feature-level and transformation-level. For the feature-level ensemble, we utilize three kinds of image augmentation: rotation, flipping, and central cropping, summing up to 64 distinct ways of data augmentation (Appendix F). For the transformation-level ensemble, we use the proposed compositional transformations with 8 different combinations of λ and b that encourage diverse feature transformations (Appendix F.1) without loss of accuracy (Figure 2). The size of the resulting configuration pool becomes 512, and DeepVoro's performance is shown in Table 3. Clearly, DeepVoro outperforms all previous methods, especially on 5-way 5-shot FSL. Specifically, DeepVoro is better than the next best by 2.18% (over Ni et al. (2021)) on mini-ImageNet, by 1.47% (over Hu et al. (2021)) on CUB, and by 1.02% (over Yang et al. (2021)) on tiered-ImageNet. Note that this is an estimated improvement because not all competing methods were tested with the same random seed and number of episodes. More detailed results can be found in Appendix F. By virtue of CCVD and using the simplest VD as the building block, DeepVoro is arguably able to yield consistently better results through the ensemble of a massive pool of independent VDs. DeepVoro also exhibits high resistance to outliers, as shown in Figure K.16.
DeepVoro++: Further Improvement of FSL via Surrogate Representation. In the surrogate representation, the number of neighbors R for each novel class and the weight β balancing the surrogate/feature representations are two hyperparameters. With the help of an available validation set, a natural question is whether these hyperparameters can be found through optimization on the validation set, which requires good generalization of the hyperparameters across different novel classes. From Figure K.13, the accuracy of VD with varying hyperparameters shows good agreement between the testing and validation sets. With this in mind, we select 10 combinations of β and R, guided by the validation set, in conjunction with 2 different feature transformations and 64 different image augmentations, adding up to a large pool of 1280 configurations for the ensemble (denoted by DeepVoro++). As shown in Table 3, DeepVoro++ achieves the best results for 1-shot FSL: 2.53%
higher than Zhang et al. (2020b), 2.38% higher than Hu et al. (2021), and 1.09% higher than Zhang et al. (2020b), on three datasets, respectively, justifying the efficacy of our surrogate representation. See Appendix G for more detailed analysis.
Ablation Experiments and Running Time. Table 4 varies the level of heterogeneity (see Tables F.4 and G.5 for all datasets). The average accuracy of the VDs without CCVD integration (marked in Table 4) is significantly lower than that of the fully fledged ensemble. Table 5 presents the running time of DeepVoro(++) benchmarked on a 20-core Intel Core i7 CPU with NumPy (v1.20.3); its efficiency is comparable to that of DC/S2M2, even with >1000× diversity.
Experiments with Different Backbones, Meta-training Protocols, and Domains. Because different feature extraction backbones, meta-training losses, and degrees of discrepancy between the source/target domains all affect the downstream FSL, we here examine the robustness of DeepVoro/DeepVoro++ under a number of different circumstances; details are given in Appendices H, I, and J. Notably, DeepVoro/DeepVoro++ attains the best performance by up to 5.55% and is therefore corroborated as a superior method for FSL, regardless of the backbone, training loss, or domain.
5 CONCLUSION
In this paper, our contribution is threefold. We first theoretically unify parametric and nonparametric few-shot classifiers into a general geometric framework (VD) and show an improved result by virtue of this integration (CIVD). By extending CIVD to CCVD, we present a fast geometric ensemble method (DeepVoro) that takes into consideration thousands of FSL configurations with high efficiency. To deal with the extreme data insufficiency in one-shot learning, we further propose a novel surrogate representation which, when incorporated into DeepVoro, promotes the performance of one-shot learning to a higher level (DeepVoro++). In future studies, we plan to extend our geometric approach to meta-learning-based FSL and lifelong FSL.
ACKNOWLEDGMENTS
This research was supported in part by NSF through grant IIS-1910492.
REPRODUCIBILITY STATEMENT
Our code as well as data split, random seeds, hyperparameters, scripts for reproducing the results in the paper are available at https://github.com/horsepurve/DeepVoro.
A NOTATIONS AND ACRONYMS
Parameters for feature-level, transformation-level, and geometry-level heterogeneity are shown in yellow, blue, and red, respectively, in the original table. See Sec. F for implementation details. † Here PD is reduced to VD by Theorem 3.1. ‡ For every λ (or R), the b (or β) value with the highest validation accuracy is introduced into the configuration pool.
Methods | Geometric Structure | Centers | Tunable Parameter | # | Description

DeepVoro-- | CIVD | C_k = {c_k, c~_k}, with c_k from VD and c~_k from PD†
    (no tunable parameters)

DeepVoro | CCVD | C_k = {c_k^(ρ_i)}_{i=1}^{L}, ρ_i ∈ {T} × {P_{w,b,λ}}
    angle of rotation   | 4 | −
    flipping or not     | 2 | −
    scaling & cropping  | 8 | −
    w = 1               | − | scale factor in linear transformation
    b                   | 4 | shift factor in linear transformation
    λ                   | 2 | exponent in powers transformation
    #configurations L = 4 × 2 × 8 × 4 × 2 = 512

DeepVoro++ | CCVD | C_k = {c_k^(ρ_i)}_{i=1}^{L}, ρ_i ∈ {T} × {P_{w,b,λ}} × {M}
    angle of rotation   | 4  | −
    flipping or not     | 2  | −
    scaling & cropping  | 8  | −
    w = 1               | −  | scale factor in linear transformation
    b                   | 1‡ | shift factor in linear transformation
    λ                   | 2  | exponent in powers transformation
    R                   | 10 | the number of top-R nearest base prototypes for a novel prototype
    γ = 1               | −  | weight for surrogate representation
    β                   | 1‡ | weight for feature representation
    #configurations L = 4 × 2 × 8 × 1 × 2 × 10 = 1280
B POWER DIAGRAM SUBDIVISION AND VORONOI REDUCTION
B.1 PROOF OF THEOREM 3.1
Lemma B.1. The vertical projection from the lower envelope of the hyperplanes {Π_k(z) : y = W_k^T z + b_k}_{k=1}^K onto the input space R^n defines the cells of a PD.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W, b partitions the input space R^n into a Voronoi Diagram with centers {c~_1, ..., c~_K} given by c~_k = (1/2)W_k if b_k = −(1/4)||W_k||_2^2, k = 1, ..., K.
Proof. We first articulate Lemma B.1 and find the exact relationship between the hyperplane Π_k(z) and the center of its associated cell in R^n. By Definition 3.1, the cell for a point z ∈ R^n is found by comparing d(z, c_k)^2 − ν_k for different k, so we define the power function p(z, S) expressing this value

p(z, S) = (z − u)^2 − r^2    (11)

in which S ⊆ R^n is a sphere with center u and radius r. In fact, the weight ν associated with a center in Definition 3.1 can be interpreted as the square of the radius, r^2. Next, let U denote the paraboloid y = z · z, and let Π(S) be the transform that maps a sphere S with center u and radius r into the hyperplane

Π(S) : y = 2z · u − u · u + r^2.    (12)
It can be proved that Π is a bijective mapping between arbitrary spheres in R^n and nonvertical hyperplanes in R^{n+1} that intersect U (Aurenhammer, 1987). Further, let z′ denote the vertical projection of z onto U and z′′ its vertical projection onto Π(S); then the power function can be written as

p(z, S) = d(z, z′) − d(z, z′′),    (13)

which implies the following relationship between a sphere in R^n and an associated hyperplane in R^{n+1} (Lemma 4 in Aurenhammer (1987)): let S_1 and S_2 be non-concentric spheres in R^n; then the bisector of their Power cells is the vertical projection of Π(S_1) ∩ Π(S_2) onto R^n. Now we have a direct relationship between a sphere S and its hyperplane Π(S), and comparing equation (12) with the hyperplanes used in logistic regression, {Π_k(z) : y = W_k^T z + b_k}_{k=1}^K, gives us
u = (1/2) W_k,    r^2 = b_k + (1/4) ||W_k||_2^2.    (14)
Although there is no guarantee that b_k + (1/4)||W_k||_2^2 is always positive for an arbitrary logistic regression model, we can impose a constraint that keeps r^2 at zero during the optimization, which implies

b_k = −(1/4) ||W_k||_2^2.    (15)
In this way, the radii of all K spheres become identical (all zero). After the optimization of the logistic regression model, the centers {(1/2)W_k}_{k=1}^K are used for the CIVD integration.
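The reduction is easy to check numerically. Below is a small self-contained check (ours, not from the paper; shapes and seeds are arbitrary) that, under the constraint b_k = −(1/4)||W_k||_2^2, the linear classifier's argmax coincides with the nearest-center rule with centers W_k / 2:

import numpy as np

rng = np.random.default_rng(0)
K, n = 5, 16
W = rng.normal(size=(K, n))
b = -0.25 * (W ** 2).sum(axis=1)  # the constraint of Theorem 3.1
for _ in range(1000):
    z = rng.normal(size=n)
    by_linear = int(np.argmax(W @ z + b))                             # linear classifier
    by_voronoi = int(np.argmin(np.linalg.norm(z - 0.5 * W, axis=1)))  # nearest W_k / 2
    assert by_linear == by_voronoi
print("linear classifier = Voronoi Diagram with centers W_k / 2")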
C DETAILS ABOUT THE DEMONSTRATIVE EXAMPLE ON MULTIDIGITMNIST DATASET
MultiDigitMNIST (Sun, 2019) is a dataset created by concatenating two (or three) digits of different classes from MNIST for few-shot image classification. Here we use the DoubleMNIST dataset (i.e. two digits in an image), consisting of 100 classes (00 to 99) with 1000 images of size 64 × 64 × 1 per class; the classes are further split into 64, 20, and 16 classes for training, testing, and validation, respectively. To better embed into the R^2 space, we pick a ten-class subset (00, 01, 12, 13, 04, 05, 06, 77, 08, and 09) as the base classes for meta-training, and another five-class subset (02, 49, 83, 17, and 36) for one episode. The feature extractor is a 4-layer convolutional network with an additional fully-connected layer for 2D embedding. In the left panel of Figure 1, the VD is obtained by setting the centroid of each base class as its Voronoi center. For each novel class, the Voronoi center is simply the 1-shot support sample (Figure 1, central panel). The surrogate representation is computed as the collection of distances from a support/query sample to each of the base classes, as shown in the right panel of Figure 1. Interestingly, the surrogate representations for a novel class, whether for a support sample (dotted line) or a query sample (colored lines), generally follow a certain pattern (similar within a class, distinct across classes), making them ideal surrogates for distinguishing between different novel classes. In this paper, we design a series of algorithms answering multiple questions regarding this surrogate representation: how to select base classes for the calculation of the surrogate representation, how to combine it with the feature representation, and how to integrate it into the overall ensemble workflow.
D MAIN DATASETS
For a fair and thorough comparison with previous works, three widely-adopted benchmark datasets are used throughout this paper.
(1) mini-ImageNet (Vinyals et al., 2016) is a shrunken subset of ILSVRC-12 (Russakovsky et al., 2015), consisting of 100 classes, of which 64 are for training, 20 for testing, and 16 for validation. Each class has 600 images of size 84 × 84 × 3. (2) CUB (Welinder et al., 2010) is another benchmark dataset for FSL, especially fine-grained FSL, including 200 species (classes) of birds. CUB is an unbalanced dataset with 58 images per class on average, also of size 84 × 84 × 3. We split all classes into 100 base classes, 50 novel classes, and 50 validation classes, following previous works (Chen et al., 2019a).
(3) tiered-ImageNet (Ren et al., 2018) is another subset of ILSVRC-12 (Russakovsky et al., 2015) but has more images, 779,165 in total. All images are categorized into 351 base classes, 97 validation classes, and 160 novel classes. The number of images per class is not always the same, 1,281 on average. The image size is also 84 × 84 × 3.
E DEEPVORO--: INTEGRATING PARAMETRIC AND NONPARAMETRIC METHODS VIA CIVD
Table E.3: Cluster-induced Voronoi Diagram (CIVD) for the integration of parametric Logistic Regression (LR) and nonparametric nearest-neighbor (i.e. Voronoi Diagram, VD) methods. The results from S2M2 R and DC are also included in this table but excluded from the comparison. The best result is marked in bold.

Methods         | mini-ImageNet 1-shot | mini-ImageNet 5-shot | CUB 1-shot   | CUB 5-shot   | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot
S2M2 R          | 64.65 ± 0.45         | 83.20 ± 0.30         | 80.14 ± 0.45 | 90.99 ± 0.23 | 68.12 ± 0.52           | 86.71 ± 0.34
DC              | 67.79 ± 0.45         | 83.69 ± 0.31         | 79.93 ± 0.46 | 90.77 ± 0.24 | 74.24 ± 0.50           | 88.38 ± 0.31
Power-LR        | 65.45 ± 0.44         | 84.47 ± 0.29         | 79.66 ± 0.44 | 91.62 ± 0.22 | 73.57 ± 0.48           | 89.07 ± 0.29
Voronoi-LR      | 65.58 ± 0.44         | 84.51 ± 0.29         | 79.63 ± 0.44 | 91.61 ± 0.22 | 73.65 ± 0.48           | 89.15 ± 0.29
VD              | 65.37 ± 0.44         | 84.37 ± 0.29         | 78.57 ± 0.44 | 91.31 ± 0.23 | 72.83 ± 0.49           | 88.58 ± 0.29
CIVD-based DeepVoro--:
VD + Power-LR   | 65.63 ± 0.44         | 84.25 ± 0.30         | 79.52 ± 0.43 | 91.52 ± 0.22 | 73.68 ± 0.48           | 88.71 ± 0.29
VD + Voronoi-LR | 65.85 ± 0.43         | 84.66 ± 0.29         | 79.40 ± 0.44 | 91.57 ± 0.22 | 73.78 ± 0.48           | 89.02 ± 0.29
E.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we first establish three few-shot classification models with different underlying geometric structures, two logistic regression (LR) models and one nearest neighbor model: (1) Power Diagram-based LR (Power-LR), (2) Voronoi Diagram-based LR (Voronoi-LR), and (3) Voronoi Diagram (VD). Then, the main purposes of our analysis are (1) to examine how the performance is affected by the proposed Voronoi Reduction method in Sec. 3.2, and (2) to inspect whether VD can be integrated with Power/Voronoi Diagram-based LRs.
The feature transformation used throughout this section is P_{w,b,λ} with w = 1.0, b = 0.0, λ = 0.5. For Power-LR, we train it directly on the transformed K-way N-shot support samples using the PyTorch library and the Adam optimizer with a batch size of 64 and a learning rate of 0.01. For Voronoi-LR, the vanilla LR is retrofitted as shown in Algorithm 1, in which the bias is given by Theorem 3.1 to ensure that the parameters induce a VD in each iteration.
In our CIVD model in Definition 3.2, we use a cluster instead of a single prototype to stand for a novel class. Here this cluster contains two points, i.e. C_k = {c_k, c~_k}, in which c_k is obtained from VD and c~_k is acquired from Power-LR or Voronoi-LR. The question we intend to answer here is whether Power-LR or Voronoi-LR is the more suitable model for the integration.
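As a concrete illustration, below is a minimal NumPy sketch (ours, not the authors' released code; array shapes and names are assumptions) of this cluster-to-point CIVD integration: each class cluster C_k = {c_k, c~_k} scores a query by the influence function F(C_k, z) = −sign(α) Σ d(·, z)^α of Definition 3.3.

import numpy as np

def civd_predict(z, vd_centers, lr_centers, alpha=1.0):
    # vd_centers, lr_centers: (K, n) arrays holding c_k and c~_k; z: (n,)
    scores = []
    for c_k, c_k_tilde in zip(vd_centers, lr_centers):
        dists = (np.linalg.norm(z - c_k), np.linalg.norm(z - c_k_tilde))
        scores.append(-np.sign(alpha) * sum(d ** alpha for d in dists))
    return int(np.argmax(scores))  # the cell (class) with maximal influence

# example: 3 classes in R^4, LR centers close to the VD prototypes
rng = np.random.default_rng(0)
vd = rng.normal(size=(3, 4))
print(civd_predict(rng.normal(size=4), vd, vd + 0.1 * rng.normal(size=(3, 4))))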
Algorithm 1: Voronoi Diagram-based Logistic Regression.
Data: Support set S
Result: W
  Initialize W ← W^(0)
  for epoch ← 1, ..., #epoch do
      b_k ← −(1/4)||W_k||_2^2, for all k = 1, ..., K    // apply Theorem 3.1
      compute loss L(W, b)                              // forward propagation
      update W                                          // backward propagation
  end
  return W
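A minimal PyTorch sketch of Algorithm 1 is given below, assuming the support features have already been transformed; the bias is re-derived from W before every forward pass so that the learned classifier always induces a VD (Theorem 3.1). The optimizer and learning rate follow the text above; everything else (shapes, initialization, epoch count) is an assumption.

import torch
import torch.nn.functional as F

def train_voronoi_lr(feats, labels, num_classes, epochs=100, lr=0.01):
    # feats: (K*N, n) transformed support features; labels: (K*N,) int64
    W = (0.01 * torch.randn(num_classes, feats.shape[1])).requires_grad_()
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(epochs):
        b = -0.25 * (W ** 2).sum(dim=1)  # Theorem 3.1: b_k = -||W_k||^2 / 4
        logits = feats @ W.t() + b       # forward propagation
        loss = F.cross_entropy(logits, labels)
        opt.zero_grad()
        loss.backward()                  # backward propagation
        opt.step()                       # update W (b is recomputed from W)
    return 0.5 * W.detach()              # the induced Voronoi centers W_k / 2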
Figure F.1: The t-SNE visualizations of (A) original features, (B) L2 normalization, (C) Tukey’s Ladder of Powers Transformation with λ = 0.5, and (D) compositional transformation with λ = 0, w = 1, b = 0.04 of 5 novel classes from mini-ImageNet dataset.
E.2 RESULTS
The results are shown in Table E.3. Interestingly, when integrated with VD, Power-LR never reaches the best result, suggesting that VD and LR are intrinsically different geometric models and cannot be simply integrated without additional effort. On the mini-ImageNet and tiered-ImageNet datasets, the best results are achieved by either Voronoi-LR or VD+Voronoi-LR, showing that CIVD coupled with the proposed Voronoi reduction can ideally integrate parametric and nonparametric few-shot models. Notably, on these two datasets, when Power-LR is reduced to Voronoi-LR, although the number of parameters is decreased (b is directly given by Theorem 3.1 and not involved in the optimization), the performance is always better, for example, increasing from 65.45% to 65.58% on 5-way 1-shot mini-ImageNet. On the CUB dataset, the results of different models are similar, probably because CUB is a fine-grained dataset and all classes are similar to each other (all birds).
F DEEPVORO: IMPROVING FSL VIA HIERARCHICAL HETEROGENEITIES
F.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section we describe feature-level and transformation-level heterogeneities that are used for ensemble in order to improve FSL. See the next section for geometry-level heterogeneity.
Feature-level heterogeneity. Considering the reproducibility of the methodology, we only employ deterministic data augmentations of the images, without randomness involved. Specifically, three kinds of data augmentation techniques are used. (1) Rotation is an important augmentation method widely used in self-supervised learning (Mangla et al., 2020). Rotating the original images by 0°, 90°, 180°, and 270° gives us four ways of augmentation. (2) After rotation, we can flip the images horizontally, giving rise to two additional choices after each rotation degree. (3) Central cropping after scaling can alter the resolution and focus area of the image. Scaling the original images to (84+B)×(84+B), with B increasing from 0 to 70 in steps of 10, brings us eight ways of augmentation.
Finally, different combinations of the three types result in 64 kinds of augmentation methods (i.e. |{T}| = 64).

Transformation-level heterogeneity. In our compositional transformation, the function (h_λ ∘ g_{w,b} ∘ f)(z) is parameterized by w, b, and λ. Since g is appended after the L2 normalization f, the vector that comes into g is always a unit vector, so we simply set w = 1. For the different combinations of λ and b, we test different values with either λ = 0 or λ ≠ 0 on the hold-out validation set (as shown in Figures 2 and K.12), and pick the top-8 combinations with the best performance on the validation set.
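The following self-contained sketch (ours, with illustrative names) enumerates the 4 × 2 × 8 = 64 deterministic augmentation configurations and implements the compositional transformation (h_λ ∘ g_{w,b} ∘ f)(z); it assumes nonnegative (post-ReLU) backbone features so that the powers/log transform is well defined.

import numpy as np
from itertools import product

def augmentation_pool():
    # 4 rotations x 2 flips x 8 scale-and-crop sizes = 64 deterministic T's
    rotations = [0, 90, 180, 270]
    flips = [False, True]
    paddings = range(0, 80, 10)  # scale to (84+B)x(84+B), B = 0, 10, ..., 70
    return list(product(rotations, flips, paddings))

def compositional_transform(z, w=1.0, b=0.0, lam=0.5, eps=1e-12):
    z = z / (np.linalg.norm(z) + eps)           # f: L2 normalization
    z = w * z + b                               # g_{w,b}: linear transformation
    z = np.clip(z, eps, None)                   # keep the powers/log well defined
    return z ** lam if lam != 0 else np.log(z)  # h_lam: Tukey's ladder of powers

assert len(augmentation_pool()) == 64           # |{T}| = 64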
Ensemble Schemes. Now, in our configuration pool {T} × {P_{w,b,λ}}, there are 512 possible configurations {ρ^(i)}_{i=1}^{512}. For each ρ, we apply it on both the testing and the validation sets. With this large pool of ensemble candidates, how, and whether, to select a subset {ρ^(i)}_{i=1}^{L′} ⊆ {ρ^(i)}_{i=1}^{512} is still a nontrivial problem. Here we explore three different schemes. (1) Full (vanilla) ensemble. All candidates in {ρ^(i)}_{i=1}^{512} are taken into consideration and plugged into Definition 3.5 to build the CCVD for space partition. (2) Random ensemble. A randomly selected subset of size L′ < L is used for the ensemble. (3) Guided ensemble. We expect that the performance of {ρ^(i)}_{i=1}^{512} on the validation set can be used to guide the selection of {ρ^(i)}_{i=1}^{L′} on the testing set, provided that there is a good correlation between the testing and validation sets. Specifically, we rank the configurations by their performance on the validation set, and add them sequentially into {ρ^(i)}_{i=1}^{L′} until a maximum ensemble performance is reached on the validation set; we then use this configuration set for the final ensemble. Since VD is nonparametric and fast, we adopt VD as the building block and only use VD for each ρ in the remaining part of the paper. The α value in the influence function (Definition 3.3) is set to 1 throughout the paper for simplicity of computation.
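A minimal sketch of the CCVD-based ensemble (Definition 3.5 with α = 1) and of the guided selection scheme is shown below; the data layout (one center per configuration and class) and all names are our assumptions, not the released implementation.

import numpy as np

def ccvd_predict(support_centers, query_feats, member_ids=None):
    # support_centers: (L, K, n), one center c_k^(rho_i) per configuration and
    # class; query_feats: (L, n), the query cluster C(z) in the same order
    if member_ids is not None:  # restrict the ensemble to a selected subset
        support_centers = support_centers[member_ids]
        query_feats = query_feats[member_ids]
    dists = np.linalg.norm(support_centers - query_feats[:, None, :], axis=-1)
    # with alpha = 1, maximal influence F(C_k, C(z)) = -sum_i d_i corresponds
    # to the minimal summed distance over ensemble members
    return int(np.argmin(dists.sum(axis=0)))

def guided_selection(val_acc_per_member, eval_fn):
    # rank members by validation accuracy, grow the ensemble prefix, and keep
    # the prefix with the highest ensemble accuracy on the validation set
    order = np.argsort(val_acc_per_member)[::-1]
    accs = [eval_fn(order[:m]) for m in range(1, len(order) + 1)]
    return order[: int(np.argmax(accs)) + 1]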
For a fair comparison, we downloaded the trained models1 used by Mangla et al. (2020) and Yang et al. (2021). The performance of FSL algorithms is typically evaluated by a sequence of independent episodes, so the data split and the random seed for the selection of novel classes, as well as the support/query sets in each episode, will all lead to different results. To ensure the fairness of our evaluation, DC (Yang et al., 2021) and S2M2 R (Mangla et al., 2020) are reevaluated with the same data split and random seed as DeepVoro. The results are obtained by running 2000 episodes, and the average accuracy as well as 95% confidence intervals are reported.
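For completeness, here is a small sketch of the reporting protocol, computing the mean accuracy over episodes with a 95% confidence interval taken as 1.96 times the standard error (our assumption about the exact CI formula):

import numpy as np

def report(episode_accs):
    accs = np.asarray(episode_accs, dtype=float)  # one accuracy per episode
    mean = accs.mean()
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
    return f"{100 * mean:.2f} ± {100 * ci95:.2f}"

print(report(np.random.default_rng(0).uniform(0.6, 0.9, size=2000)))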
F.2 RESULTS
Table F.4: Ablation study of DeepVoro's performance with different levels of ensemble. The number of ensemble members is given in parentheses (✓/✗: level used/not used).

Methods                | Feature-level | Transformation-level | mini-ImageNet 1-shot | mini-ImageNet 5-shot | CUB 1-shot   | CUB 5-shot   | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot
No Ensemble            | ✗             | ✗                    | 65.37 ± 0.44         | 84.37 ± 0.29         | 78.57 ± 0.44 | 91.31 ± 0.23 | 72.83 ± 0.49           | 88.58 ± 0.29
Vanilla Ensemble (8)   | ✗             | ✓                    | 66.45 ± 0.44         | 84.55 ± 0.29         | 80.98 ± 0.44 | 91.47 ± 0.22 | 74.02 ± 0.49           | 88.90 ± 0.29
Vanilla Ensemble (64)  | ✓             | ✗                    | 67.88 ± 0.45         | 86.39 ± 0.29         | 77.30 ± 0.43 | 91.26 ± 0.23 | 73.74 ± 0.49           | 88.67 ± 0.29
Vanilla Ensemble (512) | ✓             | ✓                    | 69.23 ± 0.45         | 86.70 ± 0.28         | 79.90 ± 0.43 | 91.70 ± 0.22 | 74.51 ± 0.48           | 89.11 ± 0.29
Random Ensemble (512)  | ✓             | ✓                    | 69.30 ± 0.45         | 86.74 ± 0.28         | 80.40 ± 0.43 | 91.94 ± 0.22 | 74.64 ± 0.48           | 89.15 ± 0.29
Guided Ensemble (512)  | ✓             | ✓                    | 69.48 ± 0.45         | 86.75 ± 0.28         | 82.99 ± 0.43 | 92.62 ± 0.22 | 74.98 ± 0.48           | 89.40 ± 0.29
Our proposed compositional transformation enlarges the expressivity of the transformation function. When Tukey's ladder of powers transformation is used individually, as reported in Yang et al. (2021), the optimal λ is not 0; but if an additional linear transformation g is inserted between f and h, λ = 0 coupled with a proper b can give an even better result, as shown in Figures 2 and K.12. Importantly, from Figure 2, a combination of λ and b with good performance on the validation set can also produce a satisfactory result on the testing set, suggesting that it is possible to optimize the hyperparameters on the validation set and generalize well to the testing set. In terms of the polymorphism induced by various transformations in the feature space, Figure F.1 exhibits the t-SNE visualizations of the original features and the features after three different kinds of transformations, showing that the relative positions of different novel classes change substantially, especially after the compositional transformation (panel D). Besides commonly used data augmentation, this transformation thus provides another level of diversity that may be beneficial to the subsequent ensemble.
The results for different levels of ensemble are shown in Table F.4, in which the number of ensemble members is also indicated. Although transformation-level ensemble does not involve any change to the feature, it can largely improve the results for 1-shot FSL, from 65.37% to 66.45% on mini-ImageNet, from 78.57% to 80.98% on CUB, and from 72.83% to 74.02% on tiered-ImageNet, respectively, probably because 1-shot FSL is more prone to overfitting due to its severe data deficiency. Feature-level ensemble, on the other hand, is more important for 5-shot FSL, especially for mini-ImageNet. When combining the two levels, the number of ensemble members increases to 512 and the performance significantly surpasses each individual level. On all three datasets, the guided ensemble scheme always achieves the best result for both single-shot and multi-shot cases, showing that the validation set can indeed be used to guide the subset selection and that our method is robust across classes in the same domain. When no such validation set is available, the full ensemble and random ensemble schemes can also give comparable results.

1 Downloaded from https://github.com/nupurkmr9/S2M2_fewshot
To inspect how performance changes with different numbers of ensemble members, we exhibit the distribution of accuracy at the three ensemble levels for mini-ImageNet in Figures F.2 and F.3, for CUB in Figures F.4 and F.5, and for tiered-ImageNet in Figures F.6 and F.7. Panel (b) of each figure also exhibits the correlation between the testing and validation sets for all 512 configurations. Clearly, a better result is often reached when more configurations participate in the ensemble, validating the efficacy of our method for improving the performance and robustness of FSL.

Algorithm 2: VD with Surrogate Representation for Episode T.
Data: Base classes D, support set S = {(x_i, y_i)}_{i=1}^{K×N}, y_i ∈ C_T, query sample x
Result: d~
  D′ ← (P_{w,b,λ} ∘ φ ∘ T)(D)    // extract and transform features
  S′ ← (P_{w,b,λ} ∘ φ ∘ T)(S)
  z ← (P_{w,b,λ} ∘ φ ∘ T)(x)
  for t ← 1, ..., |C_base| do    // compute prototypes of base classes
      c′_t ← (1 / |{(z′, y) | z′ ∈ D′, y = t}|) Σ_{z′∈D′, y=t} z′
  end
  for k ← 1, ..., K do           // compute prototypes from support samples
      c_k ← (1/N) Σ_{z′∈S′, y=k} z′
      d_k ← d(z, c_k)
  end
  C_surrogate ← ∅
  for k ← 1, ..., K do           // find surrogate classes
      C_surrogate ← C_surrogate ∪ Top-R_{t∈{1,...,|C_base|}} d(c_k, c′_t)
  end
  R~ ← |C_surrogate|
  d′ ← (d(z, c′_1), ..., d(z, c′_{R~}))    // surrogate representation of the query sample
  for k ← 1, ..., K do           // surrogate representations of support samples
      d′_k ← (d(c_k, c′_1), ..., d(c_k, c′_{R~}))
      d′′_k ← d(d′, d′_k)
  end
  d~ ← β d/||d||_1 + γ d′′/||d′′||_1    // compute the final criterion, equation (8)
  return d~
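A NumPy sketch of Algorithm 2 follows; tensor shapes and helper names are illustrative assumptions. It computes the feature criterion d, the surrogate criterion d′′ over the union of the top-R surrogate base classes, and the combined criterion d~ of equation (8).

import numpy as np

def surrogate_vd(z, support_protos, base_protos, R=10, beta=1.0, gamma=1.0):
    # feature criterion: distances from the query to the K novel prototypes
    d = np.linalg.norm(support_protos - z, axis=1)
    # surrogate classes: union of the top-R nearest base classes of each c_k
    d_kt = np.linalg.norm(support_protos[:, None] - base_protos[None], axis=-1)
    surrogate = np.unique(np.argsort(d_kt, axis=1)[:, :R])  # R~ <= R * K ids
    # surrogate representations: distances to the surrogate base prototypes
    d_query = np.linalg.norm(base_protos[surrogate] - z, axis=1)  # (R~,)
    d_support = d_kt[:, surrogate]                                # (K, R~)
    d2 = np.linalg.norm(d_support - d_query, axis=1)  # surrogate criterion d''
    d_tilde = beta * d / d.sum() + gamma * d2 / d2.sum()  # equation (8)
    return int(np.argmin(d_tilde)), d_tilde

# example usage with random features: K = 5 novel classes, 100 base classes
rng = np.random.default_rng(0)
pred, _ = surrogate_vd(rng.normal(size=64), rng.normal(size=(5, 64)),
                       rng.normal(size=(100, 64)))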
[Figure F.2: Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool, on 5-way 5-shot mini-ImageNet. Panels: (a) transformation-level ensemble; (b) testing/validation set correlation (transformation-level, feature-level, and Dirichlet Tessellation ensemble vs. DC and S2M2 R); (c) feature-level ensemble; (d) DeepVoro with random/guided/full ensemble.]
[Figure F.3: As Figure F.2, on 5-way 1-shot mini-ImageNet.]
[Figure F.4: As Figure F.2, on 5-way 5-shot CUB.]
[Figure F.5: As Figure F.2, on 5-way 1-shot CUB.]
[Figure F.6: As Figure F.2, on 5-way 5-shot tiered-ImageNet.]
[Figure F.7: As Figure F.2, on 5-way 1-shot tiered-ImageNet.]
[Figure G.8: The accuracy of VD with an increasing number of shots on the mini-ImageNet dataset. Accuracy by number of shots — 1: 65.31%, 3: 80.12%, 5: 84.05%, 7: 85.95%, 10: 87.60%, 15: 88.91%, 20: 89.75%, 40: 90.63%, 100: 91.18%, 200: 91.22%, 400: 91.55%.]
G DEEPVORO++: FURTHER IMPROVEMENT OF FSL VIA SURROGATE REPRESENTATION
G.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we introduce another layer of heterogeneity, namely the geometry-level heterogeneity that exists in our surrogate representation. In Definition 3.4, increasing R enlarges the degree of locality when searching for the top-R surrogate classes. In equation (8), if we set γ = 1, then increasing β makes the model rely more on the feature representation and less on the surrogate representation. In order to weigh up R and β, we perform a grid search over different combinations of R and β on the validation set, as shown in Figures K.13, K.14, and K.15. For each R, we select the β that gives the best result on the validation set, and use this (R, β) pair on the testing set, resulting in 10 such pairs in total. So there are 10 models in the geometry-level heterogeneity, standing for different degrees of locality. In conjunction with the feature-level (64 kinds of augmentations) and transformation-level (here only the top-2 best transformations are used) heterogeneities, there are now 1280 different configurations in the pool used by the CCVD model. In total, there are 512 + 1280 = 1792 configurations for a few-shot episode. Generating ~1800 ensemble candidates is nearly intractable for parametric methods like logistic regression or a cosine classifier, which may consume, e.g., months for thousands of episodes. However, the VD model is nonparametric and highly efficient, making it empirically possible to collect all the combinations and integrate them together via CCVD. The complete algorithm for the computation of the surrogate representation is shown in Algorithm 2.
G.2 RESULTS
The heatmaps for different (R, β) pairs on the testing/validation sets are shown in Figure K.13 for mini-ImageNet, in Figure K.14 for CUB, and in Figure K.15 for tiered-ImageNet, respectively. Basically, the testing and validation sets follow the same pattern. When R is small, i.e. only a small number of base classes are used as surrogates, a higher weight should be placed on the feature representation. With a fixed β, increasing R beyond a certain threshold will potentially cause a drop in accuracy, probably because the meaningful similarities are likely to be overwhelmed by signals from the large volume of irrelevant base classes.
Table G.5: Ablation study of DeepVoro++'s performance with different levels of ensemble. The number of ensemble members is given in parentheses (✓/✗: level used/not used).

Methods               | Feature-level | Transformation-level | Geometry-level | mini-ImageNet 1-shot | CUB 1-shot   | tiered-ImageNet 1-shot
No Ensemble           | ✗             | ✗                    | ✗              | 65.37 ± 0.44         | 78.57 ± 0.44 | 72.83 ± 0.49
Vanilla Ensemble (20) | ✗             | ✓                    | ✓              | 68.38 ± 0.46         | 80

1. What is the main contribution of the paper in few-shot learning?
2. What are the strengths of the proposed approach, particularly in its novel perspective?
3. What are the weaknesses of the paper regarding the ablation study and computational cost?

Summary Of The Paper
This paper provides a new geometric point of view for few-shot learning (FSL). In this view, the widely used ProtoNet can be regarded as a Dirichlet Tessellation (Voronoi Diagram) in the feature space. Furthermore, the authors use the recent Cluster-induced Voronoi Diagram (CIVD) for FSL and propose an ensemble approach to achieve a stronger FSL model. Extensive experimental results on three standard benchmarks demonstrate the effectiveness of the proposed method.
Review
Strengths
1. The new geometric point of view of FSL is novel and interesting. This paper builds a bridge between computational geometry and FSL.
2. The approach achieves new state-of-the-art performance on three standard datasets, including mini-ImageNet, CUB, and tiered-ImageNet, demonstrating its effectiveness.
Weaknesses
1. The ablation study seems insufficient. What is the performance of the proposed method if the ensemble strategy is removed? The reviewer did not find the corresponding experimental results.
2. Usually, an ensemble strategy is time-consuming. What is the computational cost of the proposed methods, especially for DeepVoro++? This method, with 1280 configurations, seems to require a large amount of computation time.
ICLR | Title
Few-shot Learning via Dirichlet Tessellation Ensemble
Abstract
Few-shot learning (FSL) is the process of rapid generalization from abundant base samples to inadequate novel samples. Despite extensive research in recent years, FSL is still not yet able to generate satisfactory solutions for a wide range of real-world applications. To confront this challenge, we study the FSL problem from a geometric point of view in this paper. One observation is that the widely embraced ProtoNet model is essentially a Voronoi Diagram (VD) in the feature space. We retrofit it by making use of a recent advance in computational geometry called Cluster-induced Voronoi Diagram (CIVD). Starting from the simplest nearest neighbor model, CIVD gradually incorporates cluster-to-point and then cluster-to-cluster relationships for space subdivision, which is used to improve the accuracy and robustness at multiple stages of FSL. Specifically, we use CIVD (1) to integrate parametric and nonparametric few-shot classifiers; (2) to combine feature representation and surrogate representation; (3) and to leverage feature-level, transformation-level, and geometry-level heterogeneities for a better ensemble. Our CIVD-based workflow enables us to achieve new state-of-the-art results on mini-ImageNet, CUB, and tiered-ImagenNet datasets, with ∼2%−5% improvements upon the next best. To summarize, CIVD provides a mathematically elegant and geometrically interpretable framework that compensates for extreme data insufficiency, prevents overfitting, and allows for fast geometric ensemble for thousands of individual VD. These together make FSL stronger.
1 INTRODUCTION
Recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason for which is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (Deng et al., 2009). However, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (Yoo et al., 2021) and drug discovery (Ma et al., 2021b; 2018). As a consequence, Few-shot Learning (FSL) has recently drawn growing interests (Wang et al., 2020).
Generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. A typical transductive FSL algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (Liu et al., 2019); notwithstanding its normally higher performance, in many real world scenarios a query sample (e.g. patient) also comes individually and is unique, for instance, in personalized pharmacogenomics (Sharifi-Noghabi et al., 2020). Thus, we in this paper adhere to the inductive setting and make on-the-fly prediction for each newly seen sample.
Few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. Despite the extensive research
All four authors are corresponding authors.
on the algorithmic aspects of FSL (see Sec. 2), two challenges still pose an obstacle to successful FSL: (1) how to sufficiently compensate for the data deficiency in FSL? and (2) how to make the most use of the base samples and the pre-trained model?
For the first question, data augmentation has been a successful approach to expand the size of data, either by Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) (Li et al., 2020b; Zhang et al., 2018) or by variational autoencoders (VAEs) (Kingma & Welling, 2014) (Zhang et al., 2019; Chen et al., 2019b). However, in each way, the authenticity of either the augmented data or the feature is not guaranteed, and the out-of-distribution hallucinated samples (Ma et al., 2019) may hinder the subsequent FSL. Recently, Liu et al. (2020b) and Ni et al. (2021) investigate supportlevel, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of FSL models has not been taken into consideration. For the second question, Yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. However, when there is no proximal base class, this calibration may utterly alter the distribution. Another line of work (Sbai et al., 2020; Zhou et al., 2020) learns to select and design base classes for a better discrimination on novel classes, which all introduce extra training burden. As a matter of fact, we still lack a method that makes full use of the base classes and the pretrained model effectively.
In this paper, we study the FSL problem from a geometric point of view. In metric-based FSL, despite being surprisingly simple, the nearest neighbor-like approaches, e.g. ProtoNet (Snell et al., 2017) and SimpleShot (Wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. Geometrically, what a nearest neighbor-based method does, under the hood, is to partition the feature space into a Voronoi Diagram (VD) that is induced by the feature centroids of the novel classes. Although it is highly efficient and simple, Voronoi Diagrams coarsely draw the decision boundary by linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure arises in FSL.
To resolve this issue, we adopt a novel technique called Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2013; 2017; Huang & Xu, 2020; Huang et al., 2021), which is a recent breakthrough in computation geometry. CIVD generalizes VD from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. It enables us to determine the dominating
region (or Voronoi cell) not only for a point (e.g. a class prototype) but also for a cluster of points, guaranteed to have a (1 + )-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. CIVD provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than VD without losing the resistance to overfitting.
Accordingly, in this paper, we show how CIVD is used to improve multiple stages of FSL and make several contributions as follows.
1. We first categorize different types of few-shot classifiers as different variants of Voronoi Diagram: nearest neighbor model as Voronoi Diagram, linear classifier as Power Diagram, and cosine classifier as spherical Voronoi Diagram (Table 1). We then unify them via CIVD that enjoys the advantages of multiple models, either parametric or nonparametric (denoted as DeepVoro--).
2. Going from cluster-to-point to cluster-to-cluster influence, we further propose Cluster-to-cluster Voronoi Diagram (CCVD), as a natural extension of CIVD. Based on CCVD, we present DeepVoro which enables fast geometric ensemble of a large pool of thousands of configurations for FSL.
3. Instead of using base classes for distribution calibration and data augmentation (Yang et al., 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote DeepVoro to DeepVoro++ that integrates feature-level, transformation-level, and geometry-level heterogeneities in FSL.
Extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all
three benchmark datasets including mini-ImageNet, CUB, and tiered-ImageNet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures.
2 RELATED WORK
Few-Shot Learning. There are a number of different lines of research dedicated to FSL. (1) Metricbased methods employ a certain distance function (cosine distance (Mangla et al., 2020; Xu et al., 2021), Euclidean distance (Wang et al., 2019; Snell et al., 2017), or Earth Mover’s Distance (Zhang et al., 2020a;b)) to bypass the optimization and avoid possible overfitting. (2) Optimization-based approaches (Finn et al., 2017) manages to learn a good model initialization that accelerates the optimization in the meta-testing stage. (3) Self-supervised-based (Zhang et al., 2021b; Mangla et al., 2020) methods incorporate supervision from data itself to learn a robuster feature extractor. (4) Ensemble method is another powerful technique that boosting the performance by integrating multiple models (Ma et al., 2021a). For example, Dvornik et al. (2019) trains several networks simultaneously and encourages robustness and cooperation among them. However, due to the high computation load of training deep models, this ensemble is restricted by the number of networks which is typically <20. In Liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which, may potentially limit the diversity of ensemble members.
Geometric Understanding of Deep Learning. The geometric structure of deep neural networks is first hinted at by Raghu et al. (2017) who reveals that piecewise linear activations subdivide input space into convex polytopes. Then, Balestriero et al. (2019) points out that the exact structure is a Power Diagram (Aurenhammer, 1987) which is subsequently applied upon recurrent neural network (Wang et al., 2018) and generative model (Balestriero et al., 2020). The Power/Voronoi Diagram subdivision, however, is not necessarily the optimal model for describing feature space. Recently, Chen et al. (2013; 2017); Huang et al. (2021) uses an influence function F (C, z) to measure the joint influence of all objects in C on a query z to build a Cluster-induced Voronoi Diagram (CIVD). In this paper, we utilize CIVD to magnify the expressivity of geometric modeling for FSL.
3 METHODOLOGY
3.1 PRELIMINARIES
Few-shot learning aims at discriminating between novel classes Cnovel with the aid of a larger amount of samples from base classes Cbase, Cnovel∩Cbase = ∅. The whole learning process usually follows the
meta-learning scheme. Formally, given a dataset of base classes D = {(xi, yi)},xi ∈ D, yi ∈ Cbase with D being an arbitrary domain e.g. natural image, a deep neural network z = φ(x), z ∈ Rn, which maps from image domain D to feature domain Rn, is trained using standard gradient descent algorithm, and after which φ is fixed as a feature extractor. This process is referred to as metatraining stage that squeezes out the commonsense knowledge from D. For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of K-way N -shot tasks (episodes) {T }. Each such episode is further decomposed into a support set S = {(xi, yi)}K×Ni=1 , yi ∈ CT and a query set Q = {(xi, yi)} K×Q i=1 , yi ∈ CT , in which the episode classes CT is a randomly sampled subset of Cnovel with cardinality K, and each class contains onlyN andQ random samples in the support set and query set, respectively. For few-shot classification, we introduce here two widely used schemes as follows. For simplicity, all samples here are from S and Q, without data augmentation applied. Nearest Neighbor Classifier (Nonparametric). In Snell et al. (2017); Wang et al. (2019) etc., a prototype ck is acquired by averaging over all supporting features for a class k ∈ CT :
ck = 1
N
∑ x∈S,y=k φ(x) (1)
Then each query sample x ∈ Q is classified by finding the nearest prototype: ŷ = arg minkd(z, ck) = ||z − ck||22, in which we use Euclidean distance for distance metric d. Linear Classifier (Parametric). Another scheme uses a linear classifier with cross-entropy loss optimized on the supporting samples:
L(W , b) = ∑ (x,y)∈S − log p(y|φ(x);W , b) = ∑
(x,y)∈S − log exp(W Ty φ(x) + by)∑ k exp(W T k φ(x) + bk) (2)
in which Wk, bk are the linear weight and bias for class k, and the predicted class for query x ∈ Q is ŷ = arg maxk p(y|z;Wk, bk).
3.2 FEW-SHOT LEARNING AS CLUSTER-INDUCED VORONOI DIAGRAMS
In this section, we first introduce the basic concepts of Voronoi Tessellations, and then show how parametric/nonparametric classifier heads can be unified by VD.
Definition 3.1 (Power Diagram and Voronoi Diagram). Let Ω = {ω1, ..., ωK} be a partition of the space Rn, and C = {c1, ..., cK} be a set of centers such that ∪Kr=1ωr = Rn,∩Kr=1ωr = ∅. Additionally, each center is associated with a weight νr ∈ {ν1, ..., νK} ⊆ R+. Then the set of pairs {(ω1, c1, ν1), ..., (ωK , cL, νK)} is a Power Diagram (PD), where each cell is obtained via ωr = {z ∈ Rn : r(z) = r}, r ∈ {1, ..,K}, with
r(z) = arg min k∈{1,...,K}
d(z, ck) 2 − νk. (3)
If the weights are equal for all k, i.e. νk = νk′ ,∀k, k′ ∈ {1, ...,K}, then a PD collapses to a Voronoi Diagram (VD).
By definition, it is easy to see that the nearest neighbor classifier naturally partitions the space into K cells with centers {c1, ..., cK}. Here we show that the linear classifier is also a VD under a mild condition.
Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by W , b partitions the input space Rn to a Voronoi Diagram with centers {c̃1, ..., c̃K} given by c̃k = 12Wk if bk = − 14 ||Wk|| 2 2, k = 1, ...,K.
Proof. See Appendix B for details.
3.2.1 FROM VORONOI DIAGRAM TO CLUSTER-INDUCED VORONOI DIAGRAM
Now that both nearest neighbor and linear classifier have been unified by VD, a natural idea is to integrate them together. Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al.,
2021) is a generalization of VD which allows multiple centers in a cell, and is successfully used for clinical diagnosis from biomedical images (Wang et al., 2015), providing an ideal tool for the integration of parametric/nonparametric classifier for FSL. Formally: Definition 3.2 (Cluster-induced Voronoi Diagram (CIVD) (Chen et al., 2017; Huang et al., 2021)). Let Ω = {ω1, ..., ωK} be a partition of the space Rn, and C = {C1, ..., CK} be a set (possibly a multiset) of clusters. The set of pairs {(ω1, C1), ..., (ωK , CK)} is a Cluster-induced Voronoi Diagram (CIVD) with respect to the influence function F (Ck, z), where each cell is obtained via ωr = {z ∈ Rn : r(z) = r}, r ∈ {1, ..,K}, with
r(z) = arg max k∈{1,...,K}
F (Ck, z). (4)
Here C can be either a given set of clusters or even the whole power set of a given point set, and the influence function is defined as a function over the collection of distances from each member in a cluster Ck to a query point z: Definition 3.3 (Influence Function). The influence from Ck, k ∈ {1, ...,K} to z /∈ Ck is F (Ck, z) = F ({d(c(i)k , z)|c (i) k ∈ Ck} |Ck| i=1). In this paper F is assumed to have the following form
F (Ck, z) = − sign(α) ∑|Ck| i=0 d(c (i) k , z) α. (5)
The sign function here makes sure that F is a monotonically decreasing function with respect to distance d. The hyperparameter α controls the magnitude of the influence, for example, in gravity force α = −(n− 1) in n-dimensional space and in electric force α = −2. Since the nearest neighbor centers {ck}Kk=1 and the centers introduced by linear classifier {c̃k}Kk=1 are obtained from different schemes and could both be informative, we merge the corresponding centers for a novel class k to be a new cluster Ck = {ck, c̃k}, and use the resulting C = {C1, ..., CK} to establish a CIVD. In such a way, the final partition may enjoy the advantages of both parametric and nonparametric classifier heads. We name this approach as DeepVoro--.
3.3 FEW-SHOT CLASSIFICATION VIA SURROGATE REPRESENTATION
In nearest neighbor classifier head, the distance from a query feature z to each of the prototypes {ck}Kk=1 is the key discrimination criterion for classification. We rewrite {d(z, ck)}Kk=1 to be a vector d ∈ RK such that dk = d(z, ck). These distances are acquired by measure the distance between two points in high dimension: z, ck ∈ Rn. However, the notorious behavior of high dimension is that the ratio between the nearest and farthest points in a point set P approaches 1 (Aggarwal et al., 2001), making {d(z, ck)}Kk=1 less discriminative for classification, especially for FSL problem with sample size N ·K n. Hence, in this paper, we seek for a surrogate representation. In human perception and learning system, similarity among familiar and unfamiliar objects play a key role for object categorization and classification (Murray et al., 2002), and it has been experimentally verified by functional magnetic resonance imaging (fMRI) that a large region in occipitotemporal cortex processes the shape of both meaningful and unfamiliar objects (Op de Beeck et al., 2008). In our method, a connection will be built between each unfamiliar novel class in Cnovel and each related well-perceived familiar class in Cbase. So the first step is to identify the most relevant base classes for a specific task T . Concretely: Definition 3.4 (Surrogate Classes). In episode T , given the set of prototypes {ck}Kk=1 for the support set S and the set of prototypes {c′t} |Cbase| t=1 for the base set D, the surrogate classes for episode classes CT is given as:
Csurrogate(T ) = K⋃ k=1 Top-R t∈{1,...,|Cbase|} d(ck, c ′ t) (6)
in which the top-R function returnsR base class indices with smallest distances to ck, and the center for a base class t is given as c′t = 1 |{(x,y)|x∈D,y=t}| ∑ x∈D,y=tφ(x). Here R is a hyperparameter.
The rationale behind this selection instead of simply using the whole base classes Cbase is that, the episode classes CT are only overlapped with a portion of base classes (Zhang et al., 2021a), and
discriminative similarities are likely to be overwhelmed by the background signal especially when the number of base classes is large. After the surrogate classes are found, we re-index their feature centers to be {c′j}R̃j=1, R̃ ≤ R · K. Then, both support centers {ck}Kk=1 and query feature z are represented by the collection of similarities to these surrogate centers:
d′k = (d(ck, c ′ 1), ..., d(ck, c ′ R̃ )), k = 1, ...,K
d′ = (d(z, c′1), ..., d(z, c ′ R̃
)) (7)
where d′k,d ′ ∈ RR̃ are the surrogate representation for novel class k and query feature z, respectively. By surrogate representation, the prediction is found through ŷ = arg minkd(d ′,d′k) = arg mink||d′ − d′k||22. This set of discriminative distances is rewritten as d′′ ∈ RK such that d′′k = d(d
′,d′k). An illustration of the surrogate representation is shown in Figure 1 on MultiDigitMNIST, a demonstrative dataset.
Integrating Feature Representation and Surrogate Representation. Until now, we have two discriminative systems, i.e., feature-based d ∈ RK and surrogate-based d′′ ∈ RK . A natural idea is to combine them to form the following final criterion:
d̃ = β d
||d||1 + γ
d′′
||d′′||1 , (8)
where d and d′′ are normalized by their Manhattan norm, ||d||1 = ∑K k=1dk and ||d′′||1 = ∑K k=1d ′′ k , respectively, and β and γ are two hyperparameters adjusting the weights for feature representation and surrogate representation.
3.4 DEEPVORO: INTEGRATING MULTI-LEVEL HETEROGENEITY OF FSL
In this section we present DeepVoro, a fast geometric ensemble framework that unites our contributions to multiple stages of FSL, and show how it can be promoted to DeepVoro++ by incorporating surrogate representation.
Compositional Feature Transformation. It is believed that FSL algorithms favor features with more Gaussian-like distributions, and thus various kinds of transformations are used to improve the normality of feature distribution, including power transformation (Hu et al., 2021), Tukey’s Ladder of Powers Transformation (Yang et al., 2021), and L2 normalization (Wang et al., 2019). While these transformations are normally used independently, here we propose to combine several transformations sequentially in order to enlarge the expressivity of transformation function and to increase the polymorphism of the FSL process. Specifically, for a feature vector z, three kinds of transformations are considered: (I) L2 Normalization. By projection onto the unit sphere in Rn, the feature is normalized as: f(z) = z||z||2 . (II) Linear Transformation. Now since all the features are located on the unit sphere, we then can do scaling and shifting via a linear transformation: gw,b(z) = wz + b. (III) Tukey’s Ladder of Powers Transformation. Finally, Tukey’s Ladder of Powers Transformation
is applied on the feature: hλ(z) = { zλ if λ 6= 0 log(z) if λ = 0 . By the composition of L2 normalization, linear transformation, and Tukey’s Ladder of Powers Transformation, now the transformation function becomes (hλ ◦ gw,b ◦ f)(z) parameterized by w, b, λ. Multi-level Heterogeneities in FSL. Now we are ready to articulate the hierarchical heterogeneity existing in different stages of FSL. (I) Feature-level Heterogeneity: Data augmentation has been exhaustively explored for expanding the data size of FSL (Ni et al., 2021), including but not limited to rotation, flipping, cropping, erasing, solarization, color jitter, MixUp (Zhang et al., 2017), etc. The modification of image x will change the position of feature z in the feature space. We denote all possible translations of image as a set of functions {T}. (II) Transformation-level Heterogeneity: After obtaining the feature z, a parameterized transformation is applied to it, and the resulting features can be quite different for these parameters (see Figure F.1). We denote the set of all possible transformations to be {Pw,b,λ}. (III) Geometry-level Heterogeneity: Even with the provided feature, the few-shot classification model can still be diverse: whether a VD or PD-based model is used, whether the feature or the surrogate representation is adopted, and the setting of R will also change the degree of locality. We denote all possible models as {M}.
DeepVoro for Fast Geometric Ensemble of VDs. With the above three-layer heterogeneity, the FSL process can be encapsulated as (M◦Pw,b,λ◦φ◦T )(x), and all possible configurations of a given episode T with a fixed φ is the Cartesian product of these three sets: {T}×{Pw,b,λ}×{M}. Indeed, when a hold-out validation dataset is available, it can be used to find the optimal combination, but by virtue of ensemble learning, multiple models can still contribute positively to FSL (Dvornik et al., 2019). Since the cardinality of the resulting configuration set could be very large, the FSL model M as well as the ensemble algorithm is required to be highly efficient. The VD is a nonparametric model and no training is needed during the meta-testing stage, making it suitable for fast geometric ensemble. While CIVD models the cluster-to-point relationship via an influence function, here we further extend it so that cluster-to-cluster relationship can be considered. This motivates us to define Cluster-to-cluster Voronoi Diagram (CCVD): Definition 3.5 (Cluster-to-cluster Voronoi Diagram). Let Ω = {ω1, ..., ωK} be a partition of the space Rn, and C = {C1, ..., CK} be a set of totally ordered sets with the same cardinality L (i.e. |C1| = |C2| = ... = |CK | = L). The set of pairs {(ω1, C1), ..., (ωK , CK)} is a Cluster-to-cluster Voronoi Diagram (CCVD) with respect to the influence function F (Ck, C(z)), and each cell is obtained via ωr = {z ∈ Rn : r(z) = r}, r ∈ {1, ..,K}, with
r(z) = arg max k∈{1,...,K}
F (Ck, C(z)) (9)
where C(z) is the cluster (also a totally ordered set with cardinality L) that query point z belongs, which is to say, all points in this cluster (query cluster) will be assigned to the same cell. Similarly, the Influence Function is defined upon two totally ordered sets Ck = {c(i)k }Li=1 and C(z) = {z(i)}Li=1:
F (Ck, C(z)) = − sign(α) ∑L i=0 d(c (i) k , z (i))α. (10)
With this definition, now we are able to streamline our aforementioned novel approaches into a single ensemble model. Suppose there are totally L possible settings in our configuration pool {T} × {Pw,b,λ} × {M}, for all configurations {ρi}Li=1, we apply them onto the support set S to generate the K totally ordered clusters {{c(ρi)k }Li=1}Kk=1 including each center c (ρi) k derived through configuration ρi, and onto a query sample x to generate the query cluster C(z) = {z(ρ1), ...,z(ρL)}, and then plug these two into Definition 3.5 to construct the final Voronoi Diagram.
When only the feature representation is considered in the configuration pool, i.e. ρi ∈ {T} × {Pw,b,λ}, our FSL process is named as DeepVoro; if surrogate representation is also incorporated, i.e. ρi ∈ {T} × {Pw,b,λ} × {M}, DeepVoro is promoted to DeepVoro++ that allows for higher geometric diversity. See Appendix A for a summary of the notations and acronyms
4 EXPERIMENTS
The main goals of our experiments are to: (1) validate the strength of CIVD to integrate parametric and nonparametric classifiers and confirm the necessity of Voronoi reduc-
tion; (2) investigate how different levels of heterogeneity individually or collaboratively contribute to the overall result, and compare them with the state-of-art method; (3) reanalyze this ensemble when the surrogate representation comes into play, and see how it could ameliorate the extreme shortage of support samples. See Table 2 for a summary and Appendix D for the detailed descriptions of mini-ImageNet (Vinyals et al., 2016), CUB (Welinder et al., 2010), and tiered-ImageNet (Ren et al., 2018), that are used in this paper.
DeepVoro--: Integrating Parametric and Nonparametric Methods via CIVD. To verify our proposed CIVD model for the integration of parameter/nonparametric FSL classifiers, we first run three standalone models: Logistic Regressions with Power/Voronoi Diagrams as the underlining geometric structure (Power-LR/Voronoi-LR), and vanilla Voronoi Diagram (VD, i.e. nearest neighbor model), and then integrate VD with either Power/Voronoi-LR (see Appendix E for details). Interestingly, VD with the Power-LR has never reached the best result, suggesting that ordinary LR cannot
be integrated with VD due to their intrinsic distinct geometric structures. After the proposed Voronoi reduction (Theorem 3.1), however, VD+Voronoi-LR is able to improve upon both models in most cases, suggesting that CIVD can ideally integrate parameter and nonparametric models for better FSL.
DeepVoro: Improving FSL by Hierarchical Heterogeneities. In this section, we only consider two levels of heterogeneity for the ensemble: feature-level and transformation-level. For the feature-level ensemble, we utilize three kinds of image augmentations: rotation, flipping, and central cropping, summing up to 64 distinct ways of data augmentation (Appendix F). For the transformation-level ensemble, we use the proposed compositional transformations with 8 different combinations of λ and b that encourage diverse feature transformations (Appendix F.1) without loss of accuracy (Figure 2). The size of the resulting configuration pool becomes 512, and DeepVoro's performance is shown in Table 3. Clearly, DeepVoro outperforms all previous methods, especially on 5-way 5-shot FSL. Specifically, DeepVoro is better than the next best by 2.18% (Ni et al. (2021)) on mini-ImageNet, by 1.47% (Hu et al. (2021)) on CUB, and by 1.02% (Yang et al. (2021)) on tiered-ImageNet. Note that this is an estimated improvement because not all competitive methods here are tested with the same random seed and number of episodes. More detailed results can be found in Appendix F. By virtue of CCVD and using the simplest VD as the building block, DeepVoro is able to yield consistently better results by the ensemble of a massive pool of independent VDs. DeepVoro also exhibits high resistance to outliers, as shown in Figure K.16.
DeepVoro++: Further Improvement of FSL via Surrogate Representation. In the surrogate representation, the number of neighbors R for each novel class and the weight β balancing the surrogate and feature representations are two hyperparameters. With a validation set available, a natural question is whether these hyperparameters can be found through optimization on the validation set, which requires good generalization of the hyperparameters across different novel classes. From Figure K.13, the accuracy of the VD with varying hyperparameters shows good agreement between the testing and validation sets. With this in mind, we select 10 combinations of β and R, guided by the validation set, in conjunction with 2 different feature transformations and 64 different image augmentations, adding up to a large pool of 1280 configurations for the ensemble (denoted by DeepVoro++). As shown in Table 3, DeepVoro++ achieves the best results for 1-shot FSL: 2.53% higher than Zhang et al. (2020b), 2.38% higher than Hu et al. (2021), and 1.09% higher than Zhang et al. (2020b), on the three datasets, respectively, justifying the efficacy of our surrogate representation. See Appendix G for a more detailed analysis.

[Table 3 and the accompanying figure panels (A. mini-ImageNet 5-way 5-shot; B. mini-ImageNet 5-way 1-shot; C. CUB 5-way 5-shot; D. CUB 5-way 1-shot) appear here in the original layout.]
Ablation Experiments and Running Time. Table 4 varies the level of heterogeneity (see Tables F.4 and G.5 for all datasets). The average accuracy of VDs without CCVD integration, as marked in those tables, is significantly lower than the fully-fledged ensemble. Table 5 presents the running time of DeepVoro(++) benchmarked on a 20-core Intel Core i7 CPU with NumPy (v1.20.3), whose efficiency is comparable to DC/S2M2_R, even with >1000× diversity.
Experiments with Different Backbones, Meta-training Protocols, and Domains. Because the feature extraction backbone, the meta-training loss, and the degree of discrepancy between the source and target domains all affect downstream FSL, we examine the robustness of DeepVoro/DeepVoro++ under a number of different circumstances; details are given in Appendices H, I, and J. Notably, DeepVoro/DeepVoro++ attains the best performance, with gains of up to 5.55%, and is therefore corroborated as a superior method for FSL, regardless of the backbone, training loss, or domain.
5 CONCLUSION
In this paper, our contribution is threefold. We first theoretically unify parametric and nonparametric few-shot classifiers into a general geometric framework (VD) and show an improved result by virtue of this integration (CIVD). By extending CIVD to CCVD, we present a fast geometric ensemble method (DeepVoro) that takes into consideration thousands of FSL configurations with high efficiency. To deal with the extreme data insufficiency in one-shot learning, we further propose a novel surrogate representation which, when incorporated into DeepVoro, promotes the performance of one-shot learning to a higher level (DeepVoro++). In future studies, we plan to extend our geometric approach to meta-learning-based FSL and lifelong FSL.
ACKNOWLEDGMENTS
This research was supported in part by NSF through grant IIS-1910492.
REPRODUCIBILITY STATEMENT
Our code as well as data split, random seeds, hyperparameters, scripts for reproducing the results in the paper are available at https://github.com/horsepurve/DeepVoro.
A NOTATIONS AND ACRONYMS
Parameters for feature-level, transformation-level, and geometry-level heterogeneity are shown in yellow, blue, and red, respectively. See Appendix F for implementation details. †Here PD is reduced to VD by Theorem 3.1. ‡For every λ (or R), the b (or β) value with the highest validation accuracy is introduced into the configuration pool.
Methods | Geometric Structure | Centers | Tunable Parameter | # | Description
DeepVoro-- | CIVD | C_k = {c_k, c̃_k}; c_k from VD, c̃_k from PD† | − | − | −
DeepVoro | CCVD | C_k = {c_k^(ρ_i)}_{i=1}^L, ρ_i ∈ {T} × {P_{w,b,λ}} | angle of rotation | 4 | −
 | | | flipping or not | 2 | −
 | | | scaling & cropping | 8 | −
 | | | w = 1 | − | scale factor in linear transformation
 | | | b | 4 | shift factor in linear transformation
 | | | λ | 2 | exponent in powers transformation
 | | | #configurations | L = 512 |
DeepVoro++ | CCVD | C_k = {c_k^(ρ_i)}_{i=1}^L, ρ_i ∈ {T} × {P_{w,b,λ}} × {M} | angle of rotation | 4 | −
 | | | flipping or not | 2 | −
 | | | scaling & cropping | 8 | −
 | | | w = 1 | − | scale factor in linear transformation
 | | | b | 1‡ | shift factor in linear transformation
 | | | λ | 2 | exponent in powers transformation
 | | | R | 10 | number of top-R nearest base prototypes for a novel prototype
 | | | γ = 1 | − | weight for surrogate representation
 | | | β | 1‡ | weight for feature representation
 | | | #configurations | L = 1280 |
B POWER DIAGRAM SUBDIVISION AND VORONOI REDUCTION
B.1 PROOF OF THEOREM 3.1
Lemma B.1. The vertical projection from the lower envelope of the hyperplanes $\{\Pi_k(z) : W_k^T z + b_k\}_{k=1}^K$ onto the input space $\mathbb{R}^n$ defines the cells of a PD.

Theorem 3.1 (Voronoi Diagram Reduction). The linear classifier parameterized by $W$, $b$ partitions the input space $\mathbb{R}^n$ into a Voronoi Diagram with centers $\{\tilde{c}_1, \dots, \tilde{c}_K\}$ given by $\tilde{c}_k = \frac{1}{2}W_k$ if $b_k = -\frac{1}{4}\|W_k\|_2^2$, $k = 1, \dots, K$.
Proof. We first articulate Lemma B.1 and find the exact relationship between the hyperplane $\Pi_k(z)$ and the center of its associated cell in $\mathbb{R}^n$. By Definition 3.1, the cell for a point $z \in \mathbb{R}^n$ is found by comparing $d(z, c_k)^2 - \nu_k$ for different $k$, so we define the power function $p(z, S)$ expressing this value:

$$p(z, S) = (z - u)^2 - r^2 \tag{11}$$

in which $S \subseteq \mathbb{R}^n$ is a sphere with center $u$ and radius $r$. In fact, the weight $\nu$ associated with a center in Definition 3.1 can be interpreted as the square of the radius, $r^2$. Next, let $U$ denote the paraboloid $y = z^2$, and let $\Pi(S)$ be the transform that maps a sphere $S$ with center $u$ and radius $r$ into the hyperplane

$$\Pi(S) : y = 2z \cdot u - u \cdot u + r^2. \tag{12}$$

It can be proved that $\Pi$ is a bijective mapping between arbitrary spheres in $\mathbb{R}^n$ and nonvertical hyperplanes in $\mathbb{R}^{n+1}$ that intersect $U$ (Aurenhammer, 1987). Further, let $z'$ denote the vertical projection of $z$ onto $U$ and $z''$ its vertical projection onto $\Pi(S)$; then the power function can be written as

$$p(z, S) = d(z, z') - d(z, z''), \tag{13}$$
which implies the following relationship between a sphere in $\mathbb{R}^n$ and an associated hyperplane in $\mathbb{R}^{n+1}$ (Lemma 4 in Aurenhammer (1987)): let $S_1$ and $S_2$ be non-concentric spheres in $\mathbb{R}^n$; then the bisector of their Power cells is the vertical projection of $\Pi(S_1) \cap \Pi(S_2)$ onto $\mathbb{R}^n$. Now we have a direct relationship between a sphere $S$ and its hyperplane $\Pi(S)$, and comparing equation (12) with the hyperplanes used in logistic regression, $\{\Pi_k(z) : W_k^T z + b_k\}_{k=1}^K$, gives us

$$u = \frac{1}{2}W_k, \qquad r^2 = b_k + \frac{1}{4}\|W_k\|_2^2. \tag{14}$$
Although there is no guarantee that $b_k + \frac{1}{4}\|W_k\|_2^2$ is always positive for an arbitrary logistic regression model, we can impose a constraint on $r^2$ to keep it at zero during the optimization, which implies

$$b_k = -\frac{1}{4}\|W_k\|_2^2. \tag{15}$$

In this way, the radii of all $K$ spheres become identical (all zero). After the optimization of the logistic regression model, the centers $\{\frac{1}{2}W_k\}_{k=1}^K$ are used for CIVD integration.
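As a quick numerical sanity check of Theorem 3.1 (a sketch we add here; the random data are purely illustrative), one can verify that a linear classifier with the constrained bias produces exactly the nearest-center rule of a VD with centers W_k/2:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 5, 8
W = rng.normal(size=(K, n))
b = -0.25 * np.sum(W ** 2, axis=1)   # Theorem 3.1: b_k = -||W_k||^2 / 4
centers = 0.5 * W                     # c~_k = W_k / 2

z = rng.normal(size=(1000, n))
linear_pred = np.argmax(z @ W.T + b, axis=1)                           # linear classifier
vd_pred = np.argmin(((z[:, None, :] - centers) ** 2).sum(-1), axis=1)  # nearest center
print((linear_pred == vd_pred).mean())  # 1.0: identical partitions of R^n
```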
C DETAILS ABOUT THE DEMONSTRATIVE EXAMPLE ON MULTIDIGITMNIST DATASET
The MultiDigitMNIST (Sun, 2019) dataset is created by concatenating two (or three) digits of different classes from MNIST for few-shot image classification. Here we use the DoubleMNIST dataset (i.e., two digits in an image), consisting of 100 classes (00 to 99) with 1000 images of size 64 × 64 × 1 per class; the classes are further split into 64, 20, and 16 classes for training, testing, and validation, respectively. To better embed into the R² space, we pick a ten-class subset (00, 01, 12, 13, 04, 05, 06, 77, 08, and 09) as the base classes for meta-training, and another five-class subset (02, 49, 83, 17, and 36) for one episode. The feature extractor is a 4-layer convolutional network with an additional fully-connected layer for 2D embedding. In the left panel of Figure 1, the VD is obtained by setting the centroid of each base class as the Voronoi center. For each novel class, the Voronoi center is simply the 1-shot support sample (Figure 1, central panel). The surrogate representation is computed as the collection of distances from a support/query sample to each of the base classes, as shown in the right panel of Figure 1. Interestingly, the surrogate representations for a novel class, whether of a support sample (dotted line) or a query sample (colored line), generally follow a certain pattern, alike within a class and distinct across classes, making them ideal surrogates for distinguishing between different novel classes. In this paper, we design a series of algorithms answering multiple questions regarding this surrogate representation: how to select base classes for the calculation of the surrogate representation, how to combine it with the feature representation, and how to integrate it into the overall ensemble workflow.
D MAIN DATASETS
For a fair and thorough comparison with previous works, three widely-adopted benchmark datasets are used throughout this paper.
(1) mini-ImageNet (Vinyals et al., 2016) is a reduced subset of ILSVRC-12 (Russakovsky et al., 2015), consisting of 100 classes, of which 64 are for training, 20 for testing, and 16 for validation. Each class has 600 images of size 84 × 84 × 3. (2) CUB (Welinder et al., 2010) is another benchmark dataset for FSL, especially fine-grained FSL, including 200 species (classes) of birds. CUB is an unbalanced dataset with 58 images per class on average, also of size 84 × 84 × 3. We split all classes into 100 base classes, 50 novel classes, and 50 validation classes, following previous works (Chen et al., 2019a).
(3) tiered-ImageNet (Ren et al., 2018) is another subset of ILSVRC-12 (Russakovsky et al., 2015) but with more images, 779,165 in total. All images are categorized into 351 base classes, 97 validation classes, and 160 novel classes. The number of images per class varies, with 1,281 on average. The image size is also 84 × 84 × 3.
E DEEPVORO--: INTEGRATING PARAMETRIC AND NONPARAMETRIC METHODS VIA CIVD
Table E.3: Cluster-induced Voronoi Diagram (CIVD) for the integration of the parametric Logistic Regression (LR) and nonparametric nearest neighbor (i.e., Voronoi Diagram, VD) methods. The results of S2M2_R and DC are included in this table but excluded from the comparison. The best result is marked in bold.

Methods | mini-ImageNet 5-way 1-shot | mini-ImageNet 5-way 5-shot | CUB 5-way 1-shot | CUB 5-way 5-shot | tiered-ImageNet 5-way 1-shot | tiered-ImageNet 5-way 5-shot
S2M2_R | 64.65 ± 0.45 | 83.20 ± 0.30 | 80.14 ± 0.45 | 90.99 ± 0.23 | 68.12 ± 0.52 | 86.71 ± 0.34
DC | 67.79 ± 0.45 | 83.69 ± 0.31 | 79.93 ± 0.46 | 90.77 ± 0.24 | 74.24 ± 0.50 | 88.38 ± 0.31
Power-LR | 65.45 ± 0.44 | 84.47 ± 0.29 | 79.66 ± 0.44 | 91.62 ± 0.22 | 73.57 ± 0.48 | 89.07 ± 0.29
Voronoi-LR | 65.58 ± 0.44 | 84.51 ± 0.29 | 79.63 ± 0.44 | 91.61 ± 0.22 | 73.65 ± 0.48 | 89.15 ± 0.29
VD | 65.37 ± 0.44 | 84.37 ± 0.29 | 78.57 ± 0.44 | 91.31 ± 0.23 | 72.83 ± 0.49 | 88.58 ± 0.29
CIVD-based DeepVoro--:
VD + Power-LR | 65.63 ± 0.44 | 84.25 ± 0.30 | 79.52 ± 0.43 | 91.52 ± 0.22 | 73.68 ± 0.48 | 88.71 ± 0.29
VD + Voronoi-LR | 65.85 ± 0.43 | 84.66 ± 0.29 | 79.40 ± 0.44 | 91.57 ± 0.22 | 73.78 ± 0.48 | 89.02 ± 0.29
E.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we first establish three few-shot classification models with different underlying geometric structures, two logistic regression (LR) models and one nearest neighbor model: (1) Power Diagram-based LR (Power-LR), (2) Voronoi Diagram-based LR (Voronoi-LR), and (3) Voronoi Diagram (VD). Then, the main purposes of our analysis are (1) to examine how the performance is affected by the proposed Voronoi Reduction method in Sec. 3.2, and (2) to inspect whether VD can be integrated with Power/Voronoi Diagram-based LRs.
The feature transformation used throughout this section is P_{w,b,λ} with w = 1.0, b = 0.0, λ = 0.5. For Power-LR, we train it directly on the transformed K-way N-shot support samples using the PyTorch library with the Adam optimizer, a batch size of 64, and a learning rate of 0.01. For Voronoi-LR, the vanilla LR is retrofitted as shown in Algorithm 1, in which the bias is given by Theorem 3.1 to ensure that the parameters induce a VD in each iteration.
In our CIVD model in Definition 3.2, we use a cluster instead of a single prototype to stand for a novel class. Here this cluster contains two points, i.e., C_k = {c_k, c̃_k}, in which c_k is obtained from the VD and c̃_k is acquired from Power-LR or Voronoi-LR. The question we intend to answer here is whether Power-LR or Voronoi-LR is the suitable model for the integration.
Algorithm 1: Voronoi Diagram-based Logistic Regression
Data: Support set S
Result: W
1: Initialize W ← W^(0)
2: for epoch ← 1, ..., #epoch do
3:     b_k ← −(1/4)‖W_k‖²₂, ∀k = 1, ..., K    // apply Theorem 3.1
4:     Compute loss L(W, b)                     // forward propagation
5:     Update W                                 // backward propagation
6: end for
7: return W
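A minimal PyTorch sketch of Algorithm 1 (our illustration; tensor shapes, the optimizer settings, and whether gradients flow through the constrained bias are assumptions not fixed by the paper):

```python
import torch
import torch.nn.functional as F

def train_voronoi_lr(feats, labels, K, epochs=100, lr=0.01):
    # feats: (n, d) transformed support features; labels: (n,) in {0, ..., K-1}
    W = (0.01 * torch.randn(K, feats.shape[1])).requires_grad_()
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(epochs):
        b = -0.25 * (W ** 2).sum(dim=1)                    # line 3: Theorem 3.1
        loss = F.cross_entropy(feats @ W.t() + b, labels)  # line 4: forward
        opt.zero_grad(); loss.backward(); opt.step()       # line 5: update W
    return 0.5 * W.detach()                                # VD centers c~_k = W_k / 2
```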
Figure F.1 (plot omitted): The t-SNE visualizations of (A) original features, (B) L2 normalization, (C) Tukey's Ladder of Powers transformation with λ = 0.5, and (D) the compositional transformation with λ = 0, w = 1, b = 0.04, of 5 novel classes from the mini-ImageNet dataset.
E.2 RESULTS
The results are shown in Table E.3. Interestingly, when integrated with VD, Power-LR never reaches the best result, suggesting that VD and LR are intrinsically different geometric models and cannot simply be integrated together without additional effort. On the mini-ImageNet and tiered-ImageNet datasets, the best results are achieved by either Voronoi-LR or VD+Voronoi-LR, showing that CIVD coupled with the proposed Voronoi reduction can ideally integrate parametric and nonparametric few-shot models. Notably, on these two datasets, when Power-LR is reduced to Voronoi-LR, the performance is always better even though the number of parameters decreases (b is given directly by Theorem 3.1 and is not involved in the optimization); for example, it increases from 65.45% to 65.58% on 5-way 1-shot mini-ImageNet. On the CUB dataset, the results of different models are similar, probably because CUB is a fine-grained dataset and all classes are similar to each other (all birds).
F DEEPVORO: IMPROVING FSL VIA HIERARCHICAL HETEROGENEITIES
F.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section we describe feature-level and transformation-level heterogeneities that are used for ensemble in order to improve FSL. See the next section for geometry-level heterogeneity.
Feature-level heterogeneity. For the sake of reproducibility, we only employ deterministic data augmentation of the images, with no randomness involved. Specifically, three kinds of data augmentation techniques are used. (1) Rotation is an important augmentation method widely used in self-supervised learning (Mangla et al., 2020); rotating the original images by 0°, 90°, 180°, and 270° gives us four ways of augmentation. (2) After rotation, we can flip the images horizontally, giving two additional choices for each rotation degree. (3) Central cropping after scaling alters the resolution and focus area of the image; scaling the original images to (84+B) × (84+B), with B increasing from 0 to 70 in steps of 10, brings eight ways of augmentation.
Finally, the different combinations of the three types result in 64 kinds of augmentation (i.e., |{T}| = 64). Transformation-level heterogeneity. In our compositional transformation, the function (h_λ ∘ g_{w,b} ∘ f)(z) is parameterized by w, b, and λ. Since g is appended after the L2 normalization f, the vector entering g is always a unit vector, so we simply set w = 1. For the different combinations of λ and b, we test different values with either λ = 0 or λ ≠ 0 on the hold-out validation set (as shown in Figures 2 and K.12), and pick the top-8 combinations with the best performance on the validation set.
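For reference, a minimal sketch of the compositional transformation P_{w,b,λ} = h_λ ∘ g_{w,b} ∘ f (ours; it assumes non-negative features, e.g., post-ReLU activations, so that the power and log transforms are well defined):

```python
import numpy as np

def P(z, w=1.0, b=0.0, lam=0.5, eps=1e-8):
    z = z / (np.linalg.norm(z) + eps)   # f: L2 normalization
    z = w * z + b                       # g_{w,b}: linear transformation (w = 1 here)
    # h_lam: Tukey's ladder of powers; lam = 0 is the log transform
    return np.power(z, lam) if lam != 0 else np.log(z + eps)
```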
Ensemble Schemes. Now, in our configuration pool {T} × {P_{w,b,λ}}, there are 512 possible configurations {ρ^(i)}_{i=1}^512. For each ρ, we apply it on both the testing and validation sets. With this large pool of ensemble candidates, how and whether to select a subset {ρ^(i)}_{i=1}^{L′} ⊆ {ρ^(i)}_{i=1}^512 is a nontrivial problem. Here we explore three different schemes. (1) Full (vanilla) ensemble: all candidates in {ρ^(i)}_{i=1}^512 are taken into consideration and plugged into Definition 3.5 to build the CCVD for space partition. (2) Random ensemble: a randomly selected subset of size L′ < L is used for the ensemble. (3) Guided ensemble: we expect that the performance of {ρ^(i)}_{i=1}^512 on the validation set can guide the selection of {ρ^(i)}_{i=1}^{L′} for the testing set, provided that there is good correlation between the testing and validation sets. Specifically, we rank the configurations by their performance on the validation set and add them sequentially into {ρ^(i)}_{i=1}^{L′} until a maximum ensemble performance is reached on the validation set; we then use this configuration set for the final ensemble. Since the VD is nonparametric and fast, we adopt it as the building block and use one VD for each ρ in the remainder of the paper. The α value in the influence function (Definition 3.3) is set to 1 throughout the paper for simplicity of computation.
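The guided scheme can be sketched as follows (our simplification: per-configuration class scores are averaged in place of the full CCVD integration, and the array shapes are assumptions):

```python
import numpy as np

def guided_subset(val_scores, val_labels):
    # val_scores: (n_cfg, n_queries, K) validation class scores per configuration
    per_cfg_acc = (val_scores.argmax(-1) == val_labels).mean(axis=1)
    order = np.argsort(per_cfg_acc)[::-1]          # rank configurations by accuracy
    best_acc, best_size = -1.0, 1
    for s in range(1, len(order) + 1):             # add configurations sequentially
        ens_acc = (val_scores[order[:s]].mean(0).argmax(-1) == val_labels).mean()
        if ens_acc > best_acc:
            best_acc, best_size = ens_acc, s
    return order[:best_size]                       # configurations used at test time
```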
For a fair comparison, we downloaded the trained models¹ used by Mangla et al. (2020) and Yang et al. (2021). The performance of FSL algorithms is typically evaluated over a sequence of independent episodes, so the data split and the random seed for the selection of novel classes and support/query sets in each episode all affect the result. To ensure the fairness of our evaluation, DC (Yang et al., 2021) and S2M2_R (Mangla et al., 2020) are re-evaluated with the same data split and random seed as DeepVoro. The results are obtained by running 2000 episodes, and the average accuracy as well as 95% confidence intervals are reported.
F.2 RESULTS
Table F.4: Ablation study of DeepVoro's performance with different levels of ensemble. The number of ensemble members is given in parentheses.

Methods | Feature-level | Transformation-level | mini-ImageNet 1-shot | mini-ImageNet 5-shot | CUB 1-shot | CUB 5-shot | tiered-ImageNet 1-shot | tiered-ImageNet 5-shot
No Ensemble | ✗ | ✗ | 65.37 ± 0.44 | 84.37 ± 0.29 | 78.57 ± 0.44 | 91.31 ± 0.23 | 72.83 ± 0.49 | 88.58 ± 0.29
Vanilla Ensemble (8) | ✗ | ✓ | 66.45 ± 0.44 | 84.55 ± 0.29 | 80.98 ± 0.44 | 91.47 ± 0.22 | 74.02 ± 0.49 | 88.90 ± 0.29
Vanilla Ensemble (64) | ✓ | ✗ | 67.88 ± 0.45 | 86.39 ± 0.29 | 77.30 ± 0.43 | 91.26 ± 0.23 | 73.74 ± 0.49 | 88.67 ± 0.29
Vanilla Ensemble (512) | ✓ | ✓ | 69.23 ± 0.45 | 86.70 ± 0.28 | 79.90 ± 0.43 | 91.70 ± 0.22 | 74.51 ± 0.48 | 89.11 ± 0.29
Random Ensemble (512) | ✓ | ✓ | 69.30 ± 0.45 | 86.74 ± 0.28 | 80.40 ± 0.43 | 91.94 ± 0.22 | 74.64 ± 0.48 | 89.15 ± 0.29
Guided Ensemble (512) | ✓ | ✓ | 69.48 ± 0.45 | 86.75 ± 0.28 | 82.99 ± 0.43 | 92.62 ± 0.22 | 74.98 ± 0.48 | 89.40 ± 0.29
Our proposed compositional transformation enlarges the expressivity of the transformation function. When Tukey's ladder of powers transformation is used alone, as reported in Yang et al. (2021), the optimal λ is not 0; but if an additional linear transformation g is inserted between f and h, λ = 0 coupled with a proper b can give an even better result, as shown in Figures 2 and K.12. Importantly, from Figure 2, a combination of λ and b with good performance on the validation set also produces a satisfactory result on the testing set, suggesting that it is possible to optimize the hyperparameters on the validation set and generalize well to the testing set. In terms of the polymorphism induced by the various transformations in feature space, Figure F.1 exhibits the t-SNE visualizations of the original features and the features after three different kinds of transformations, showing that the relative positions of different novel classes are largely changed, especially after the compositional transformation (panel D). Beyond commonly used data augmentation, this transformation thus provides another level of diversity that can benefit the subsequent ensemble.
The results for the different levels of ensemble are shown in Table F.4, where the number of ensemble members is also indicated. Although the transformation ensemble does not involve any change to the feature, it can largely improve the results for 1-shot FSL: from 65.37% to 66.45% on mini-ImageNet, from 78.57% to 80.98% on CUB, and from 72.83% to 74.02% on tiered-ImageNet, respectively, probably because 1-shot FSL is more prone to overfitting due to its severe data deficiency. The feature-level ensemble, on the other hand, is more important for 5-shot FSL, especially on mini-ImageNet. When combining the two levels, the number of ensemble members increases to 512 and the performance significantly surpasses each individual level. On all three datasets, the guided ensemble scheme always achieves the best result for both single-shot and multi-shot cases, showing that the validation set can indeed guide subset selection and that our method is robust across classes in the same domain. When no such validation set is available, the full ensemble and random ensemble schemes also give comparable results.

¹Downloaded from https://github.com/nupurkmr9/S2M2_fewshot
To inspect how performance changes with the number of ensemble members, we exhibit the distribution of accuracy at the three ensemble levels for mini-ImageNet in Figures F.2 and F.3, for CUB in Figures F.4 and F.5, and for tiered-ImageNet in Figures F.6 and F.7. Panel (b) in each figure also exhibits the correlation between the testing and validation sets for all 512 configurations. Clearly, a better result is often reached when there are more configurations in the ensemble, validating the efficacy of our method for improving the performance and robustness of FSL.

Algorithm 2: VD with Surrogate Representation for Episode T
Data: Base classes D, support set S = {(x_i, y_i)}_{i=1}^{K×N}, y_i ∈ C_T, query sample x
Result: d̃
1:  D′ ← (P_{w,b,λ} ∘ φ ∘ T)(D)                        // extract and transform features
2:  S′ ← (P_{w,b,λ} ∘ φ ∘ T)(S)
3:  z ← (P_{w,b,λ} ∘ φ ∘ T)(x)
4:  for t ← 1, ..., |C_base| do                         // compute prototypes of base classes
5:      c′_t ← (1 / |{(z′, y) | z′ ∈ D′, y = t}|) Σ_{z′∈D′, y=t} z′
6:  end for
7:  for k ← 1, ..., K do                                // compute prototypes from support samples
8:      c_k ← (1/N) Σ_{z′∈S′, y=k} z′
9:      d_k ← d(z, c_k)
10: end for
11: C_surrogate ← ∅
12: for k ← 1, ..., K do                                // find surrogate classes
13:     C_surrogate ← C_surrogate ∪ Top-R_{t∈{1,...,|C_base|}} d(c_k, c′_t)
14: end for
15: R̃ ← |C_surrogate|
16: d′ ← (d(z, c′_1), ..., d(z, c′_R̃))                  // surrogate representation of the query sample
17: for k ← 1, ..., K do                                // surrogate representations of the support samples
18:     d′_k ← (d(c_k, c′_1), ..., d(c_k, c′_R̃))
19:     d″_k ← d(d′, d′_k)
20: end for
21: d̃ ← β·d/‖d‖₁ + γ·d″/‖d″‖₁                          // compute final criterion
22: return d̃
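A NumPy sketch of Algorithm 2 (our illustration; the Euclidean distance is assumed for d, both in feature space and between surrogate vectors):

```python
import numpy as np

def surrogate_criterion(base_protos, support_protos, z, R=10, beta=1.0, gamma=1.0):
    # base_protos: (B, d) prototypes c'_t; support_protos: (K, d) prototypes c_k;
    # z: (d,) transformed query feature; returns d~ (smaller = more likely class).
    d_feat = np.linalg.norm(support_protos - z, axis=1)          # lines 8-9: d_k
    idx = set()                                                  # lines 11-14
    for c_k in support_protos:                                   # union of top-R bases
        dist = np.linalg.norm(base_protos - c_k, axis=1)
        idx.update(np.argsort(dist)[:R].tolist())
    sel = base_protos[sorted(idx)]                               # the R~ surrogate classes
    d_query = np.linalg.norm(sel - z, axis=1)                    # line 16: d'
    d_surr = np.array([np.linalg.norm(np.linalg.norm(sel - c_k, axis=1) - d_query)
                       for c_k in support_protos])               # lines 17-20: d''_k
    return beta * d_feat / d_feat.sum() + gamma * d_surr / d_surr.sum()  # line 21
```

The predicted label is then the argmin over k of the returned criterion, i.e., the VD cell the query falls into.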
Figure F.2 (plots omitted): Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool. Panels: (a) transformation-level ensemble, (b) testing/validation set correlation, (c) feature-level ensemble, and (d) DeepVoro, all on 5-way 5-shot mini-ImageNet.
Figure F.3 (plots omitted): Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool. Panels: (a) transformation-level ensemble, (b) testing/validation set correlation, (c) feature-level ensemble, and (d) DeepVoro, all on 5-way 1-shot mini-ImageNet.
Figure F.4 (plots omitted): Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool. Panels: (a) transformation-level ensemble, (b) testing/validation set correlation, (c) feature-level ensemble, and (d) DeepVoro, all on 5-way 5-shot CUB.
Figure F.5 (plots omitted): Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool. Panels: (a) transformation-level ensemble, (b) testing/validation set correlation, (c) feature-level ensemble, and (d) DeepVoro, all on 5-way 1-shot CUB.
Figure F.6 (plots omitted): Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool. Panels: (a) transformation-level ensemble, (b) testing/validation set correlation, (c) feature-level ensemble, and (d) DeepVoro, all on 5-way 5-shot tiered-ImageNet.
Figure F.7 (plots omitted): Three levels of ensemble and the correlation between testing and validation sets with different configurations in the configuration pool. Panels: (a) transformation-level ensemble, (b) testing/validation set correlation, (c) feature-level ensemble, and (d) DeepVoro, all on 5-way 1-shot tiered-ImageNet.
Figure G.8 (plot omitted): The accuracy of VD with an increasing number of shots on the mini-ImageNet dataset: 1 shot: 65.31%, 3: 80.12%, 5: 84.05%, 7: 85.95%, 10: 87.60%, 15: 88.91%, 20: 89.75%, 40: 90.63%, 100: 91.18%, 200: 91.22%, 400: 91.55%.
G DEEPVORO++: FURTHER IMPROVEMENT OF FSL VIA SURROGATE REPRESENTATION
G.1 EXPERIMENTAL SETUP AND IMPLEMENTATION DETAILS
In this section, we introduce another level of heterogeneity, the geometry level, which exists in our surrogate representation. In Definition 3.4, increasing R enlarges the degree of locality when searching for the top-R surrogate classes. In equation (8), if we set γ = 1, then increasing β makes the model rely more on the feature representation and less on the surrogate representation. To balance R and β, we perform a grid search over different combinations of R and β on the validation set, as shown in Figures K.13, K.14, and K.15. For each R, we select the β that gives the best result on the validation set and use this (R, β) pair on the testing set, resulting in 10 such pairs in total. There are thus 10 models in the geometry-level heterogeneity, standing for different degrees of locality. In conjunction with the feature-level (64 kinds of augmentation) and transformation-level (here only the top-2 best transformations are used) heterogeneities, there are now 1280 different configurations in the pool used by the CCVD model. In total, this gives 512 + 1280 = 1792 configurations for a few-shot episode. Generating ~1800 ensemble candidates is nearly intractable for parametric methods such as logistic regression or a cosine classifier, which may take, e.g., months for thousands of episodes. The VD model, however, is nonparametric and highly efficient, making it empirically possible to collect all the combinations and integrate them via CCVD. The complete algorithm for the computation of the surrogate representation is shown in Algorithm 2.
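The validation-guided choice of the geometry-level configurations can be sketched as follows (ours; `val_acc` is a hypothetical mapping from an (R, β) pair to its validation accuracy):

```python
def select_geometry_configs(val_acc, Rs, betas):
    # For each R, keep the beta with the highest validation accuracy,
    # yielding the 10 (R, beta) pairs used by DeepVoro++.
    return [(R, max(betas, key=lambda b: val_acc[(R, b)])) for R in Rs]
```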
G.2 RESULTS
The heatmaps for the different (R, β) pairs on the testing/validation sets are shown in Figure K.13 for mini-ImageNet, Figure K.14 for CUB, and Figure K.15 for tiered-ImageNet. The testing and validation sets basically follow the same pattern. When R is small, i.e., only a small number of base classes are used as surrogates, a higher weight should be placed on the feature representation. With a fixed β, increasing R beyond a certain threshold can cause a drop in accuracy, probably because the meaningful similarities are likely to be overwhelmed by the signals from the large volume of irrelevant base classes.
Table G.5: Ablation study of DeepVoro++'s performance with different levels of ensemble. The number of ensemble members is given in parentheses.

Methods | Feature-level | Transformation-level | Geometry-level | mini-ImageNet | CUB | tiered-ImageNet
No Ensemble | ✗ | ✗ | ✗ | 65.37 ± 0.44 | 78.57 ± 0.44 | 72.83 ± 0.49
Vanilla Ensemble (20) | ✗ | ✓ | ✓ | 68.38 ± 0.46 | 80

| 1. What is the focus and contribution of the paper on few-shot learning?
2. What are the strengths and weaknesses of the proposed CIVD-based approach?
3. Do you have any concerns regarding the terminology and definitions used in the paper?
4. What are the errors found in the equations and figures presented in the paper?
5. How can the paper be restructured to answer the obvious questions regarding the definition and modification of the DeepVoro(++) method?
6. What are the relevant ablative results that can support the main contributions of the paper?
7. How can the paper improve its presentation and clarity to make it easier to follow the line of arguments? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a CIVD-based approach to few-shot learning. CIVD, cluster-induced Voronoi diagrams, are a known technique that is used to categorize / describe different types of few-shot classifiers. In the experiment section DeepVoro(++) is shown to perform superior to other methods on three datasets.
Review
The paper is written in a way that makes it extremely hard to read. The text is not sufficiently self-contained to follow the line of arguments, terms and methods are not always well-defined, and many side-tracks with questionable relevance are considered. Most importantly, the evaluated methods DeepVoro(++) are never defined in the method part.
Terminology is sometimes used in unusual ways; e.g., I would identify data augmentation with what is said under point 3 in Section 1 rather than with the GANs and VAEs at the top of the page.
Figure 1 is confusing. Usually, when adding new classes, old ones are not supposed to be forgotten (cf. left and center panels).
Some equations contain trivial errors, e.g. the optimization below (1) should be a minimization of distance.
Some presentations are unnecessarily confusing, e.g., (3), where an offset/bias for the distance is introduced but is called a weight. The text below uses Dirichlet tessellation as a special case of PD, whereas before (3) they are identified.
The datasets used for the comparisons should be described in the main paper, not the supplementary material.
The overall results look impressive, but the paper needs a restructuring such that the following obvious questions are answered: How exactly are the methods DeepVoro(++) defined, what is their baseline and what modifications have been made? How are these modifications motivated from theory or experiments? Which of these modifications or theoretical insights are the main contributions of the paper?
In order to address these questions, include the dataset descriptions and the most relevant ablative results, section 3 needs to be streamlined and less relevant theoretical considerations need to be removed or moved to the supplementary material. |
ICLR | Title
Volumetric Disentanglement for 3D Scene Manipulation
Abstract
Recently, advances in differential volumetric rendering enabled significant breakthroughs in the photo-realistic and fine-detailed reconstruction of complex 3D scenes, which is key for many virtual reality applications. However, in the context of augmented reality, one may also wish to effect semantic manipulations or augmentations of objects within a scene. To this end, we propose a volumetric framework for (i) disentangling, or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background. Our framework takes as input a set of 2D masks specifying the desired foreground object for training views, together with the associated 2D views and poses, and produces a foreground-background disentanglement that respects the surrounding illumination, reflections, and partial occlusions, which can be applied to both training and novel views. Unlike previous work, our method does not rely on 3D information in the form of 3D object bounding boxes or a scene voxel grid. It correctly captures reflective foreground objects, objects occluded by the background, and objects with noisy and inaccurate masks. Our method enables the separate control of pixel color and depth as well as 3D similarity transformations of both the foreground and background objects. We subsequently demonstrate our framework's applicability on several downstream manipulation tasks, going beyond the placement and movement of foreground objects. These tasks include object camouflage, non-negative 3D object inpainting, 3D object translation, 3D object inpainting, and 3D text-based object manipulation.
1 INTRODUCTION
The ability to interact with a 3D environment is of fundamental importance for many augmented reality (AR) application domains such as interactive visualization, entertainment, games, and robotics Mekni & Lemieux (2014). Such interactions are often semantic in nature, capturing specified entities in a 3D scene and manipulating them accordingly. To this end, we propose a novel framework for the disentanglement and manipulation of objects in a 3D scene. Given a small set of 2D masks delineating the desired foreground object together with the associated 2D views and poses, and no other 3D information, our method produces a volumetric representation of both the foreground object and the background. Our volumetric representation enables separate control of pixel color and depth, as well as scale, rotation, and translation of the foreground object and the background. Using this disentangled representation, we demonstrate a suite of downstream manipulation tasks involving both the foreground and background volumes, going beyond previous work, and including 3D camouflage and 3D semantic text-based manipulation. Fig. 1 illustrates our proposed volumetric disentanglement and a sampling of the downstream volumetric manipulations that this disentanglement enables. We note that while the foreground/background terminology is useful for painting a mental picture, we wish to emphasize that the disentanglement is not limited to foreground objects, and works equally well for objects positioned further back (and partially occluded).
Neural Radiance Fields (NeRF) Mildenhall et al. (2020) delivered a significant breakthrough in the ability to reconstruct complex 3D scenes with high fidelity and a high level of detail. However, NeRF has no control over individual semantic objects within a scene. To this end, ObjectNeRF Yang et al. (2021) proposed to represent foreground objects by rendering rays with masked regions. While ObjectNeRF learns foreground object representation independently from the background, our method
instead disentangles the foreground from the background using a volumetric composition. In particular, the foreground object is extracted using a volumetric “subtraction” of the background from the full scene. In doing so, our method correctly captures reflective objects and those occluded by the background, as well as objects with noisy and inaccurate masks. Further, unlike our method, ObjectNeRF requires additional 3D information in the form of 3D bounding boxes to render the background and edit objects at test time and relies on an accurate estimation of depth for training.
Given a set of 2D training views and poses of a scene, as well as masks specifying the foreground object, our method first trains a neural radiance field to reconstruct the background and its associated effects, following a similar procedure to NeRF Mildenhall et al. (2020). Due to the prior induced through volumetric rendering, the resulting neural field captures the background volume, including objects appearing behind or occluding the foreground object, together with associated effects such as illumination and reflections. By training a neural radiance field to reconstruct the volume of the entire 3D scene and the volume of the background separately, the representation of the foreground can be computed in a compositional manner from the two volumes Drebin et al. (1988), as illustrated in Fig. 1, without using any other 3D information. We note that the background and foreground can be rendered from both training and novel views.
Having disentangled the foreground object from the rest of the 3D scene, we can now perform a range of downstream tasks, going beyond the placement and movement of objects, as shown in Yang et al. (2021). For example, optical-see-through devices can only add light to the scene, meaning that the generation must be non-negative with respect to the input scene Luo et al. (2021). In other cases, one may wish to keep the depth of the original scene intact Owens et al. (2014); Guo et al. (2022), and only modify the textures or colors of objects. Our framework enables properties such as color, depth, and affine transformations of both the foreground object and background to be manipulated separately, and therefore can handle such manipulation tasks.
Lastly, we consider the ability to effect semantic manipulations of the foreground. To this end, we consider the recently proposed multi-modal embedding of CLIP Radford et al. (2021). Using CLIP, we are able to manipulate the foreground object semantically using text. Recent work such as Michel et al. (2021); Wang et al. (2021a); Sanghi et al. (2021b) considered the ability to manipulate 3D scenes semantically using text. We demonstrate a similar capability, but one which extends to individual objects in our 3D scene while adhering to the semantics of the background. We also note that while 2D counterparts may exist for each of the proposed manipulations, our disentangled volumetric manipulation offers 3D-consistent semantic manipulation of foreground objects.
2 RELATED WORK
3D Disentanglement We focus on the disentanglement of semantic and geometric properties in 3D scenes. For a more comprehensive overview, see Ahmed et al. (2018). CLIP-NeRF Wang et al. (2021a) disentangles the shape and appearance of NeRF Mildenhall et al. (2020) and subsequently uses CLIP Radford et al. (2021) to manipulate these properties. Other works disentangle pose Wang et al. (2021b); Yen-Chen et al. (2021), illumination Srinivasan et al. (2021); Boss et al. (2021), and texture and shape Liu et al. (2021); Jang & de Agapito (2021); Deng et al. (2020); Noguchi et al. (2021). These works operate on an entire volumetric scene or object, not on individual objects within a scene. Further, they are limited to specific categories in constrained domains (e.g., human parts).
Another line of work considers the disentanglement of objects in a full 3D scene. Niemeyer & Geiger (2021); Nguyen-Phuoc et al. (2020) consider the generation of scenes in a compositional manner. In contrast, we disentangle an existing scene into foreground and background volumes, while they generate such volumes from scratch. A subsequent line of works considers the disentanglement of objects in an existing scene. Several representations can be used to learn 3D scenes, such as point clouds Shu et al. (2019); Yang et al. (2019); Hui et al. (2020); Achlioptas et al. (2017), meshes Hanocka et al. (2019); Groueix et al. (2018); Wang et al. (2018); Pan et al. (2019), or voxels Riegler et al. (2017); Xie et al. (2019); Wu et al. (2016); Brock et al. (2016). However, works using these representations for disentanglement Broadhurst et al. (2001); Kutulakos & Seitz (2000) are typically restricted in topology or resolution, or make strong assumptions about scenes.
Recently, a number of methods have proposed to use neural fields (NeRFs) to represent individual objects in a scene. Guo et al. (2020) use an object library and learn a per-object scattering field, which can then be composed to represent a scene in which an object's movement, lighting, and reflection can be controlled. Our method instead decomposes an existing scene into the foreground and background objects, capturing their relations and subsequently allowing for object-specific edits. Ost et al. (2021) use a scene graph representation to decompose dynamic objects, but rely on a dynamic scene as input and are restricted to a single class of objects with similar shapes. Fu et al. (2022); Kundu et al. (2022) consider specific types of semantic categories, for instance specialized domains such as traffic scenes. Unlike these works, our work is not limited in the type of editable objects in the scene and enables a wider variety of manipulations, including 3D object camouflage and 3D semantic manipulation of individual objects in a scene. Recently, and concurrently with our work, Kobayashi et al. (2022) proposed a disentanglement framework for neural fields using text or image patches. While it enables the disentanglement of coarse concepts based on a text or image patch, it does not allow for the fine-grained control that a mask provides in selecting the object to be disentangled.
Perhaps most similar to our work is ObjectNeRF Yang et al. (2021). ObjectNeRF uses an object branch to render rays with masked regions for foreground objects. At test time, it uses 3D bounding boxes of individual objects to edit their movement and placement. Similarly to ObjectNeRF, our method inherits NeRF’s ability to produce novel views for both foreground and background objects. However, our method differs from ObjectNeRF in multiple ways: (i). Our method requires input 2D segmentation masks for input training views and does not require 3D bounding boxes for editing foreground objects. Similarly, no 3D structure in the form of a voxel grid is required during training. (ii). Unlike ObjectNeRF, our method correctly captures objects with noisy and inaccurate masks as well as reflective objects and those occluded by the background. (iii). Our method relies on ground truth RGB images for existing views for our loss objectives, and does not require an occlusion loss which requires an accurate estimate of the scene’s depth of existing and novel views. (iv). Lastly, our method goes beyond the editing of objects’ movement and placement and enables zero-shot manipulations (does not require any 3D or 2D training data) such as 3D object camouflage, and 3D text-based semantic manipulation of individual objects.
3D Manipulation Our framework enables the manipulation of localized regions in a scene. While 2D counterparts, such as 2D inpainting approaches, exist Guillemot & Le Meur (2013); Yu et al. (2019); Efros & Leung (1999); Efros & Freeman (2001), they cannot generate 3D-consistent manipulations. One set of approaches considers editing the entire scene. Canfes et al. (2022) consider texture and shape manipulation of 3D meshes. CLIP-Forge Sanghi et al. (2021a) generates objects matching a text prompt using CLIP embeddings. Text2Mesh Michel et al. (2021) manipulates the texture or style of an object. DreamFields Jain et al. (2021a) learns a neural radiance field representing 3D objects from scratch. Unlike these works, our work is concerned with manipulating a local region in an existing scene. Jang & de Agapito (2021) and Liu et al. (2021) modify the shape and color code of objects using coarse 2D user scribbles, but require a curated dataset of objects under different colors and views, and are limited to synthetic objects. In contrast, our method enables the manipulation of objects in complex scenes, semantically, according to a target text prompt.
3 METHOD
Given a 3D scene, we wish to disentangle semantic objects from the rest of the scene. First, we describe the 3D volumetric representation used to disentangle objects and control objects separately (Sec. 3.1). The disentanglement of foreground and background volumes opens a wide range of downstream applications. We provide a framework that explores some of these applications by manipulating objects in a semantic manner (Sec. 3.2). An illustration of our framework is provided in Fig. 2. Additional training and implementation details are provided in Appendix A.
3.1 DISENTANGLED OBJECT REPRESENTATION
The ability to disentangle the foreground object volumetrically from the background requires a volumetric representation that correctly handles multiple challenges: (i). Foreground occluding objects, which may be covered by a foreground mask, should not be included in the foreground volume, (ii). Regions occluded by the foreground object should be visible in the background volume, (iii). Illumination and reflectance effects, affecting the foreground object in the full scene volume, should affect the now unoccluded regions of the background in a natural way. To this end, we build upon the representation of neural radiance fields Mildenhall et al. (2020).
Neural Radiance Fields. A neural radiance field Mildenhall et al. (2020) is a continuous function f whose input is a 3D position p = (x, y, z) ∈ R³ along with a viewing direction d = (θ, ϕ) ∈ S², indicating a position along a camera ray. The output of f is an RGB color c ∈ R³ and a volume density σ ∈ R₊. We first apply a frequency-based encoding γ to correctly capture high-frequency details, using γ(p) = [cos(2πBp), sin(2πBp)]ᵀ, where B ∈ Rⁿˣ³ is a random Gaussian matrix whose entries are drawn from N(0, σ_B²), with σ_B a hyperparameter. f is then parameterized as an MLP f_θ whose input is (γ(p), γ(d)) and whose output is c and σ.
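A minimal sketch of the frequency-based encoding γ (ours; the scale σ_B and the number of frequencies n are hyperparameters):

```python
import math
import torch

def fourier_encode(p, B):
    # p: (..., 3) positions (or directions); B: (n, 3) Gaussian matrix ~ N(0, sigma_B^2)
    proj = 2.0 * math.pi * p @ B.t()  # (..., n)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

B = torch.randn(64, 3) * 10.0  # e.g., n = 64 frequencies, sigma_B = 10
```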
Object Representation. Given camera extrinsics $\xi$, we assume a set $\{(c_r^i, \sigma_r^i)\}_{i=1}^{N}$ of color and volume density values predicted by $f_\theta$ for $N$ randomly chosen points along a camera ray $r$. A rendering operator then maps these values to an RGB color $c_r$ as follows:

$$c_r = \sum_{i=1}^{N} w_r^i \cdot c_r^i, \qquad w_r^i = T_r^i \cdot \alpha_r^i, \qquad T_r^i = \prod_{j=1}^{i-1}\big(1 - \alpha_r^j\big), \qquad \alpha_r^i = 1 - \exp\big(-\sigma_r^i \, \delta_r^i\big) \tag{1}$$

where $\alpha_r^i$ and $T_r^i$ are the alpha and transmittance values for point $i$ along ray $r$, and $\delta_r^i = t_{i+1} - t_i$ is the distance between adjacent samples.
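In code, the compositing of Eq. (1) can be sketched as follows (our version of the standard NeRF quadrature; tensor shapes are assumptions):

```python
import torch

def composite(sigma, colors, deltas):
    # sigma, deltas: (n_rays, n_samples); colors: (n_rays, n_samples, 3)
    alpha = 1.0 - torch.exp(-sigma * deltas)  # alpha_r^i, per-sample opacity
    ones = torch.ones_like(alpha[:, :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha[:, :-1]], dim=1), dim=1)  # T_r^i
    weights = trans * alpha                   # w_r^i in Eq. (1)
    return (weights.unsqueeze(-1) * colors).sum(dim=1), weights  # c_r and w_r^i
```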
For training, we assume a set of posed views $\{x_i\}_{i=1}^{M}$ together with their associated foreground object masks $\{m_i\}_{i=1}^{M}$. We let $\{\hat{x}_i\}_{i=1}^{M}$ be the colors corresponding to $\{x_i\}_{i=1}^{M}$ as predicted by Eq. (1). Fig. 3 gives an overview of the training. To train the background (resp. full) volume, we minimize the masked (resp. unmasked) reconstruction loss between real and estimated views:

$$L_{bg} = \sum_{i=1}^{M} \big\|(1 - m_i) \odot (x_i - \hat{x}_i)\big\|_2^2, \qquad L_{full} = \sum_{i=1}^{M} \big\|x_i - \hat{x}_i\big\|_2^2 \tag{2}$$
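The two reconstruction losses of Eq. (2) then amount to (a sketch; the per-pixel masks are broadcast over the color channels):

```python
def reconstruction_losses(x, x_hat_bg, x_hat_full, mask):
    # x: (M, H, W, 3) ground-truth views; mask: (M, H, W, 1) foreground masks m_i
    l_bg = (((1.0 - mask) * (x - x_hat_bg)) ** 2).sum()  # masked loss, background model
    l_full = ((x - x_hat_full) ** 2).sum()               # unmasked loss, full model
    return l_bg, l_full
```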
Let $w_{bg}^{ir}$ and $c_{bg}^{ir}$ be the values of $w_r^i$ and $c_r^i$ in Eq. (1) predicted for the background volume, and similarly let $w_{full}^{ir}$ and $c_{full}^{ir}$ be the values of $w_r^i$ and $c_r^i$ in Eq. (1) predicted for the full volume. A natural representation of the foreground object can then be found using the principle of volume mixing Drebin et al. (1988):

$$c_r^{fg} = \sum_{i=1}^{N} w_{fg}^{ir} \cdot c_{fg}^{ir}, \qquad w_{fg}^{ir} = w_{full}^{ir} - w_{bg}^{ir}, \qquad c_{fg}^{ir} = c_{full}^{ir} - c_{bg}^{ir} \tag{3}$$

$c_r^{fg}$ is the foreground volume color at the pixel corresponding to ray $r$. Eq. (3) renders the color of the foreground object for all pixels across different views.
Object Controls. We note that camera parameters, as well as chosen poses, rays, and sampled points along the rays, are chosen to be identical for both the full volume and the background volume, and hence also identical to the foreground volume. Given this canonical setting, the corresponding points along the rays for both the foreground and background can be easily found.
Due to the above-mentioned correspondence, one can independently modify $w_{fg}^{ir}$ and $c_{fg}^{ir}$ to obtain $w_{fg}'^{ir}$ and $c_{fg}'^{ir}$ for the foreground volume, as well as $w_{bg}^{ir}$ and $c_{bg}^{ir}$ to obtain $w_{bg}'^{ir}$ and $c_{bg}'^{ir}$ for the background volume. To recombine the modified background with the modified foreground, we note that every 3D point along the ray should be colored either according to the background volume or according to the foreground volume, but not both, as they are disentangled. We can then recombine the modified foreground and background:

$$c_r^{c} = \sum_{i=1}^{N} w_{bg}'^{ir} \cdot c_{bg}'^{ir} + w_{fg}'^{ir} \cdot c_{fg}'^{ir} \tag{4}$$

$c_r^{c}$ is the recombined color of the pixel corresponding to ray $r$. In our experiments, we only modify the foreground, so $w_{bg}'^{ir} = w_{bg}^{ir}$ and $c_{bg}'^{ir} = c_{bg}^{ir}$.
Given the ability to control the foreground and background volumes separately, we now propose a set of downstream manipulation tasks that emerge from our disentangled representation. As noted in Sec. 3.1, we can now control the weights, colors as well as translation parameters separately for the foreground and background volumes and so introduce a set of manipulation tasks that use the controls. We note that the task of Object Removal is equivalent to displaying the background.
Object Transformation. Due to the alignment of camera parameters, as well as chosen poses, rays, and sampled points along the rays, one can apply a transformation on the background and foreground volumes separately before recombining the volumes together. For either the foreground or the background, and for a given transformation $T$, we simply evaluate the color and weight of a point $p$ using $f_\theta$ at position $T^{-1}(p)$ and then recombine the volumes together using Eq. (4).
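A sketch of the transformed query (ours; f_theta is assumed to take batched points and directions, and T to be a 4×4 similarity transform):

```python
import torch

def query_transformed(f_theta, points, dirs, T):
    # Evaluate the volume under transform T by sampling f_theta at T^{-1}(p);
    # viewing directions are mapped by the inverse rotation only.
    T_inv = torch.linalg.inv(T)
    p = points @ T_inv[:3, :3].t() + T_inv[:3, 3]
    d = dirs @ T_inv[:3, :3].t()
    return f_theta(p, d)
```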
Object Camouflage. Here we wish to change the texture of the foreground 3D object such that it is difficult to detect against its background Owens et al. (2014); Guo et al. (2022). Such an ability can be useful in the context of diminished reality Mori et al. (2017). To do so, we fix the depth of the foreground object while manipulating its texture. As the depth of the foreground is derived from $w_{fg}^{ir}$, we fix $w_{fg}'^{ir} = w_{fg}^{ir}$ and only optimize $c_{fg}'^{ir}$. We follow Eq. (4) in compositing the foreground and background volumes. Let the resulting output for each view $i$ be $\hat{x}_i^c$, and let $\hat{x}_i^{bg}$ be the corresponding output of the background volume. We optimize a neural radiance field for the foreground colors $c_{fg}'^{ir}$ to minimize $L_{camouflage} = \sum_{i=1}^{M} \|\hat{x}_i^c - \hat{x}_i^{bg}\|_2^2$. As the depth is fixed, only the foreground object colors are changed, to match the background volume as closely as possible.
Non-negative 3D Inpainting. Next, we consider the setting of non-negative image generation Luo et al. (2021). We are interested in performing non-negative changes to views of the full scene so that they most closely resemble the background. This constraint is imposed by optical-see-through devices, which can only add light to an image. In this case, we learn a residual volume rendering views $\hat{x}_i^{residual}$ as in Eq. (1) to minimize $L_{non\text{-}negative} = \sum_{i=1}^{M} \|\hat{x}_i^{full} + \hat{x}_i^{residual} - \hat{x}_i^{bg}\|_2^2$, where $\hat{x}_i^{full}$ are rendered views of the full scene as in Eq. (2). That is, we learn a residual volume whose views $\hat{x}_i^{residual}$, when added to the full volume views, most closely resemble the background.
Semantic Manipulation. Next, we consider a mechanism for the semantic manipulation of the foreground. We consider the recently proposed model of CLIP Radford et al. (2021), which can be used to embed an image I and text prompt t (or image I2), and to subsequently compare the cosine similarity of the embeddings. Let this operation be sim(I, t) (resp. sim(I, I2)), where a value of 1 indicates perceptually similar text (resp. image) and image. Let x̂ci be the result of applying Eq. (4), while fixing the background colors and weights as well as the foreground weights. That is, we only optimize the foreground colors c′fg ir . For a user-specified target text t, we consider the objective:
$$\mathcal{L}_{semantic} = \sum_{i=1}^{M} 1 - \mathrm{sim}\left(\hat{x}^{c}_{i} \odot m_i + \hat{x}^{bg}_{i} \odot (1-m_i),\; t\right) \qquad (5)$$
$$+\; 1 - \mathrm{sim}\left(\hat{x}^{c}_{i} \odot m_i + \hat{x}^{bg}_{i} \odot (1-m_i),\; \hat{x}^{bg}_{i} \odot (1-m_i)\right) \qquad (6)$$
$$+\; \left\|\hat{x}^{c}_{i} \odot (1-m_i) - \hat{x}^{bg}_{i} \odot (1-m_i)\right\|_2^2 \qquad (7)$$
We note that while only the colors of the foreground volume can be manipulated, we enforce that such changes only occur within the localized masked region of the foreground, and so take the background from the fixed background volume. To do so, instead of applying the CLIP similarity directly to $\hat{x}^{c}_{i}$, we apply it to $\hat{x}^{c}_{i} \odot m_i + \hat{x}^{bg}_{i} \odot (1-m_i)$. Therefore, CLIP's similarity can only be improved by making local changes within the masked region of the foreground object, but it can 'see' the background as well as the foreground for context. We enforce that the generated volume views are similar to both the target text (Eq. (5)) and the background (Eq. (6)). To further enforce that no changes are made to the background, we constrain the background of the combined volume views to match that of the background volume using Eq. (7).
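Below is a hedged sketch of the composited CLIP objective in Eqs. (5)-(7) for a single view. We assume an OpenAI-style CLIP model exposing `encode_image`, a precomputed text embedding, and views already resized and normalized for CLIP; all of these details are our assumptions rather than the authors' implementation.

```python
import torch.nn.functional as F

def semantic_loss(clip_model, text_emb, x_c, x_bg, m):
    """L_semantic for one view, Eqs. (5)-(7).

    x_c:  composited view with optimized fg colors, [1, 3, H, W].
    x_bg: background-only view, same shape.
    m:    foreground mask in [0, 1], broadcastable to the views.
    """
    comp = x_c * m + x_bg * (1 - m)  # edits restricted to the masked region
    e_comp = F.normalize(clip_model.encode_image(comp), dim=-1)
    e_bg = F.normalize(clip_model.encode_image(x_bg * (1 - m)), dim=-1)
    e_txt = F.normalize(text_emb, dim=-1)
    loss = 1 - (e_comp * e_txt).sum(dim=-1)        # Eq. (5): match target text
    loss = loss + 1 - (e_comp * e_bg).sum(dim=-1)  # Eq. (6): stay close to bg
    loss = loss + ((x_c - x_bg) * (1 - m)).pow(2).sum()  # Eq. (7): fix bg
    return loss.mean()
```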
4 EXPERIMENTS
We divide the experimental section into two parts. First, we consider the ability to successfully disentangle the foreground and background volumes from the rest of the scene. Second, we demonstrate some of the many manipulation tasks this disentanglement enables, as described in Sec. 3.2. Corresponding 3D scenes from multiple existing and novel views are provided as supplementary material.
4.1 OBJECT DISENTANGLEMENT
Fig. 4 shows views from different scenes of the LLFF dataset Mildenhall et al. (2019), where we separate the full scene, background, and foreground in a volumetrically and semantically consistent manner. We compare our method to ObjectNeRF Yang et al. (2021). We note that ObjectNeRF requires 3D bounding boxes to extract the background volume, which we do not use. Hence, we consider only the foreground extracted by ObjectNeRF. As can be seen, ObjectNeRF's extracted foreground object captures much of the background as well. This is most visible for the orchids and tree trunk examples (second and third rows). Further, for the TV extraction (fourth row), our object representation isolates the reflections on the TV from the background (hence the TV appears black), while ObjectNeRF considers those reflections as part of the object representation. Our reflection-independent object representation allows us to resize the TV in a manner that correctly adheres to those reflections as ones resulting from background light sources. This can be seen in Fig. 6. As a further comparison, we consider a neural field trained to reconstruct only the masked region. Due to some noisy masks, shown in the appendix, this yields a noisy result that captures much of the background.
Fig. 5 depicts the consistency of the removal of a leaf, a T-rex, and a whiteboard for two different views. The background neural radiance field makes plausible predictions of the background scene via multi-view geometry and based on correlated effects from the scene. For example, the background behind the leaf or the legs of the T-rex might be occluded by the 2D mask from one view, but visible from another. The background behind the whiteboard, however, is occluded from every angle. Nevertheless, the background neural radiance field makes a plausible prediction of the background based on correlated effects from the surrounding scene. Further, our model handles the disentanglement of non-planar objects, such as the T-rex, well.
In the 2D domain, as far as we can ascertain, the closest 2D task to object disentanglement is that of object inpainting. We consider two prominent baselines, DeepFill-v2 Yu et al. (2019) and EdgeConnect Nazeri et al. (2019), for this task and compare our method on the scenes of leaves and whiteboard removal as in Fig. 5. We train the baselines on the same training images and their associated masks. In order to compare our method on the same novel views, we train a NeRF Mildenhall et al. (2020) on the resulting outputs, producing a scene with the same novel views as ours. Unlike our method, the results exhibit 3D inconsistencies, artifacts, and flickering between views. The visual comparison is provided in the supplementary.
To assess our method numerically, we conduct a user study and ask users to rate, on a scale of 1-5: (Q1) "How well was the object removed/extracted?" and (Q2) "How realistic is the resulting object/scene?" We consider 25 users; mean opinion scores are shown in Tab. 1. For object extraction, we consider ObjectNeRF Yang et al. (2021) and the scenes in Fig. 4. For object removal, we consider the 2D baselines DeepFill-v2 Yu et al. (2019) and EdgeConnect Nazeri et al. (2019), as detailed above, and the leaves and whiteboard scenes as in Fig. 5. As no 3D bounding box is provided, we did not consider ObjectNeRF Yang et al. (2021) for object removal.
4.2 OBJECT MANIPULATION
Foreground Transformation. We consider the ability to scale the foreground object and place the rescaled object back into the scene by changing the focal length used to generate the rays of the foreground object, and then volumetrically adding it back into our background volume. Fig. 6 shows an example where the disentangled TV is twice as large. We note that other transformations, such as translation and rotation, are possible in a similar manner. Fig. 6 highlights several properties of our volumetric disentanglement. First, the network is able to "hallucinate" a plausible background in regions occluded across all views (e.g. behind the TV). It does this based on correlated effects from the rest of the scene. A second property is that it can disentangle correlated effects such as the reflections on the TV screen, which is evident from the almost completely black TV in the foreground scene. Lastly, these correlated effects result in consistent and photo-realistic reflections when we place the rescaled TV back into the scene.
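For concreteness, the sketch below shows the standard pinhole ray generation used by NeRF-style models; scaling is obtained by changing `focal` for the foreground rays only. The variable names and conventions are ours, not the authors' code.

```python
import torch

def get_rays(H: int, W: int, focal: float, c2w: torch.Tensor):
    """Pinhole ray generation (NeRF convention, -z forward). Doubling
    `focal` narrows the field of view, so the foreground rendered with
    these rays appears about twice as large when composited back."""
    j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32),
                          indexing="ij")
    dirs = torch.stack([(i - 0.5 * W) / focal,
                        -(j - 0.5 * H) / focal,
                        -torch.ones_like(i)], dim=-1)
    rays_d = dirs @ c2w[:3, :3].T        # rotate into world coordinates
    rays_o = c2w[:3, 3].expand(rays_d.shape)
    return rays_o, rays_d
```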
Object Camouflage. Another manipulation is camouflaging an object Owens et al. (2014); Guo et al. (2022), i.e. only changing the texture of the object and not its shape. Fig. 7 illustrates examples of camouflaging with fixed depth, but free texture changes. While the depth of the camouflaged object and that of the foreground object match, the appearance is that of the background.
Non-Negative Inpainting. In optical see-through AR, one might also wish to camouflage objects Luo et al. (2021) or inpaint them. However, in see-through AR one can only add light. Fig. 8 shows how adding light can create the appearance of camouflage in a 3D-consistent manner.
3D Object Manipulation. We now consider 3D object manipulation. Fig. 9 shows two views of a fern. We have disentangled both the window mullion in the upper left corner and the tree trunk from the rest of the scene. Even though the window mullion is occluded in the first view, and thus our 2D mask is masking the occluding leaf in front of the window mullion, this occluding object is not part of the disentangled window mullion object. The 3D manipulations are shown in (c)-(e) in Fig. 9. For the strawberry manipulation in (e), note how part of the tree trunk was camouflaged to more closely resemble the shape of a strawberry. We compare to the 2D text-based inpainting methods GLIDE Nichol et al. (2021) and Blended Diffusion Avrahami et al. (2021), where we follow the same procedure as in Sec. 4.1. We consider a user study similar to that detailed in Sec. 4.1, where Q1 is modified to: "How well was the object semantically manipulated according to the target text prompt?", and consider the fern scene of Fig. 9, for the text prompts of "strawberry" and "old tree".
4.3 DISCUSSION AND LIMITATIONS
Our work has some limitations. When light from the background affects the foreground object, we correctly disentangle the illuminations on the object. However, when the object is itself a light source, we cannot completely disentangle it, as seen in Appendix Fig. 10 (a) and in the supplementary. Another limitation concerns the semantic manipulation of foreground objects. We found that manipulating objects that are too large results in an under-constrained optimization, because the signal provided by CLIP is not sufficient. We also note that, while our work can handle noisy masks, we require masks for all training views. We leave the task of reducing the number of masks for future work. Further, artifacts may arise when the training views do not provide sufficient information to generate the foreground using correlated background effects of multi-view geometry. Our work is orthogonal to recent speed and generalization extensions of NeRF that could be combined with our method. The number of 2D masks required by our method is upper bounded by the number of training views, and so methods such as DietNeRF Jain et al. (2021b) and SinNeRF Xu et al. (2022) could be combined with our method to reduce the number of 2D masks required, even down to a single mask. Alternatively, one can use zero-shot segmentation approaches such as Shin et al. (2022) to obtain the masks. In Appendix B, we consider, for the task of foreground object translation (Fig. 6), alternatives to the recombining method of Eq. (4). Lastly, we note that our method can handle noisy annotations of the foreground. In Appendix Fig. 10 (c), we demonstrate the masks used for the leaves scene, which were extracted using an off-the-shelf segmentation algorithm.
5 CONCLUSION
In this work, we presented a framework for the volumetric disentanglement of foreground objects from a background scene. The disentangled foreground object is obtained by volumetrically subtracting a learned volume representation of the background from one of the entire scene. The foreground-background disentanglement adheres to object occlusions and background effects such as illumination and reflections. We established that our disentanglement facilitates separate control of color, depth, and transformations for both the foreground and background objects. This enables a wide range of applications going beyond object movement and placement, of which we have demonstrated object camouflage, non-negative generation, and 3D object manipulation.
B ADDITIONAL VISUALIZATIONS
As noted in the main text, Fig. 10 (a) shows the failure to remove a light source. In Fig. 10 (b1 to b4), we show, for the task of foreground object translation (Fig. 6), alternatives to the recombining method of Eq. (4): (b2) $c'^{ir}_{full}$ instead of $c'^{ir}_{fg}$, (b3) $w'^{ir}_{full}$ instead of $w'^{ir}_{fg}$, (b4) $c^{c}_{r} = \sum_{i=1}^{N}(w'^{ir}_{bg} + w'^{ir}_{fg}) \cdot (c'^{ir}_{bg} + c'^{ir}_{fg})$. Fig. 10 (c) shows examples of the noisy masks used for the leaf-scene disentanglement, which our method handles correctly.
C TRAINING MASKS
We provide a sample of the training masks used for training views in Fig. 11.
Summary Of The Paper
This paper extends the powerful NeRF and introduces a method to separate foreground from background. The paper has a similar structure to ObjectNeRF. However, unlike ObjectNeRF, which takes a bounding box to identify the foreground object, the authors propose to use annotations in the 2D images to identify the foreground object. The proposed method has more accurate control of the NeRF field, in particular when the foreground object has complicated shapes or reflection patterns, as demonstrated in the experimental results.
Strengths And Weaknesses
It is a novel idea and the authors executed it well
The advantage over ObjectNeRF is justified. The authors show various examples in which their method has a cleaner separation than ObjectNeRF
The paper is well written and easy to follow
It is arguable whether 2D masks are a better input than 3D bounding boxes. Despite all the advantages shown in the paper, the major drawback of 2D masks is that they take a lot more effort to collect (compared to drawing a simple 3D bounding box). Thus, in many AR applications, 3D bounding boxes may be preferred over 2D masks
The application is limited to one foreground object. It would be interesting to see whether the proposed method can be extended to multiple foreground objects simultaneously while maintaining similar accuracy
Clarity, Quality, Novelty And Reproducibility
The paper is well written and easy to follow. The idea is neat, in a good way. I think a qualified graduate student can reproduce the work. |
Summary Of The Paper
The paper proposed to disentangle foreground/background (volumetric) models starting only from a set of images, along with camera parameters and 2D object masks. The main idea is to train two models, a full model and a background model, such that they are consistent with the given image and mask information. The proposed setup can be trained with only 2D masks, without access to 3D masks or object bounding boxes. The decomposition results enable several editing applications.
The approach is a fairly direct adaptation of the NeRF formulation with an additional masked loss to work with the 2D mask information. A baseline comparison with a simple NeRF scene reconstruction followed by extracting the foreground/background using the provided image masks would have been interesting. The proposed model will fail to disentangle in the presence of (semi-)transparent objects and/or inter-object reflections (this is already visible in some of the examples in the supplementary).
Strengths And Weaknesses
Simple method for disentangling foreground and background NeRF models.
Demonstrated on a variety of application scenarios.
Compared with Yang et al. 2021
The method is not that different, in spirit, from Yang et al. 2021. True that they use a slightly different architecture (Figure 2 in their paper) but they already present how a scene and object NERF models can be jointly trained. Btw, in the current setup, one can alternately train an object and background NERF and then use a test-time composition to get a final rendering. I would expect this to produce similar results.
The effect of noisy masks is only superficially evaluated. It should be evaluated in both synthetic and real settings, and with varying amounts of noise (as one has to provide 2D masks across all images). The evaluation should be qualitative and quantitative.
Limited evaluation on challenging cases like transparency and reflective/mirror surfaces (e.g., one can see the residual reflection on the side wall).
Nice to see the applications, but those are secondary contributions of the work. I would have liked to see more evaluation on real scenes.
It is not clear how a post-hoc projection of 2D masks onto a scene-only NeRF model would compare.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and has a good set of supplementary material (applications). The work should be reproducible.
The novelty of the work is limited (w.r.t. Yang et al. 2021), as are most of the demonstrated applications.
ICLR | Title
Volumetric Disentanglement for 3D Scene Manipulation
Abstract
Recently, advances in differential volumetric rendering enabled significant breakthroughs in the photo-realistic and fine-detailed reconstruction of complex 3D scenes, which is key for many virtual reality applications. However, in the context of augmented reality, one may also wish to effect semantic manipulations or augmentations of objects within a scene. To this end, we propose a volumetric framework for (i) disentangling or separating, the volumetric representation of a given foreground object from the background, and (ii) semantically manipulating the foreground object, as well as the background. Our framework takes as input a set of 2D masks specifying the desired foreground object for training views, together with the associated 2D views and poses, and produces a foreground-background disentanglement that respects the surrounding illumination, reflections, and partial occlusions, which can be applied to both training and novel views. Unlike previous work, our method does not rely on 3D information in the form of 3D object bounding boxes or a scene voxel grid. It correctly captures reflective foreground objects, objects occluded by the background, and objects with noisy and inaccurate masks. Our method enables the separate control of pixel color and depth as well as 3D similarity transformations of both the foreground and background objects. We subsequently demonstrate our framework’s applicability on several downstream manipulation tasks, going beyond the placement and movement of foreground objects. These tasks include object camouflage, non-negative 3D object inpainting, 3D object translation, 3D object inpainting, and 3D text-based object manipulation.
1 INTRODUCTION
The ability to interact with a 3D environment is of fundamental importance for many augmented reality (AR) application domains such as interactive visualization, entertainment, games, and robotics Mekni & Lemieux (2014). Such interactions are often semantic in nature, capturing specified entities in a 3D scene and manipulating them accordingly. To this end, we propose a novel framework for the disentanglement and manipulation of objects in a 3D scene. Given a small set of 2D masks delineating the desired foreground object together with the associated 2D views and poses, and no other 3D information, our method produces a volumetric representation of both the foreground object and the background. Our volumetric representation enables separate control of pixel color and depth, as well as scale, rotation, and translation of the foreground object and the background. Using this disentangled representation, we demonstrate a suite of downstream manipulation tasks involving both the foreground and background volumes, going beyond previous work, and including 3D camouflage and 3D semantic text-based manipulation. Fig. 1 illustrates our proposed volumetric disentanglement and a sampling of the downstream volumetric manipulations that this disentanglement enables. We note that while the foreground/background terminology is useful for painting a mental picture, we wish to emphasize that the disentanglement is not limited to foreground objects, and works equally well for objects positioned further back (and partially occluded).
Neural Radiance Fields (NeRF) Mildenhall et al. (2020) delivered a significant breakthrough in the ability to reconstruct complex 3D scenes with high fidelity and a high level of detail. However, NeRF has no control over individual semantic objects within a scene. To this end, ObjectNeRF Yang et al. (2021) proposed to represent foreground objects by rendering rays with masked regions. While ObjectNeRF learns foreground object representation independently from the background, our method
instead disentangles the foreground from the background using a volumetric composition. In particular, the foreground object is extracted using a volumetric “subtraction” of the background from the full scene. In doing so, our method correctly captures reflective objects and those occluded by the background, as well as objects with noisy and inaccurate masks. Further, unlike our method, ObjectNeRF requires additional 3D information in the form of 3D bounding boxes to render the background and edit objects at test time and relies on an accurate estimation of depth for training.
Given a set of 2D training views and poses of a scene, as well as masks, specifying the foreground object, our method first trains a neural radiance field to reconstruct the background and its associated effects, following a similar procedure to NeRF Mildenhall et al. (2020). Due to the prior induced through volumetric rendering, the resulting neural field captures the background volume that also includes objects appearing behind or occluding the foreground object, and captures associated effects such as illumination and reflections. By training a neural radiance field to reconstruct the volume of the entire 3D scene and the volume of the background separately, the representation of the foreground can be computed in a compositional manner from the two volumes Drebin et al. (1988) as illustrated in Fig. 1, without using any other 3D information. We note that the background and foreground can be rendered from both training and novel views.
Having disentangled the foreground object from the rest of the 3D scene, we can now perform a range of downstream tasks, going beyond the placement and movement of objects, as shown in Yang et al. (2021). For example, optical-see-through devices can only add light to the scene, meaning that the generation must be non-negative with respect to the input scene Luo et al. (2021). In other cases, one may wish to keep the depth of the original scene intact Owens et al. (2014); Guo et al. (2022), and only modify the textures or colors of objects. Our framework enables properties such as color, depth, and affine transformations of both the foreground object and background to be manipulated separately, and therefore can handle such manipulation tasks.
Lastly, we consider the ability to affect semantic manipulations to the foreground. To this end, we consider the recently proposed multi-modal embedding of CLIP Radford et al. (2021). Using CLIP, we are able to manipulate the foreground object semantically using text. Recent work such as Michel et al. (2021); Wang et al. (2021a); Sanghi et al. (2021b) considered the ability to manipulate 3D scenes semantically using text. We demonstrate a similar capability, but one which transcends to individual objects in our 3D scene, while adhering to the semantics of the background. We also note that while 2D counterparts may exist for each of the proposed manipulations, our disentangled volumetric manipulation offers 3D-consistent semantic manipulation of foreground objects.
2 RELATED WORK
3D Disentanglement We focus on the disentanglement of semantic and geometric properties in 3D scenes. For a more comprehensive overview, see Ahmed et al. (2018). CLIP-NeRF Wang et al. (2021a) disentangle the shape and appearance of NeRF Mildenhall et al. (2020) and, subsequently, uses CLIP Radford et al. (2021) to manipulate these properties. Other works disentangle pose Wang
et al. (2021b); Yen-Chen et al. (2021), illumination Srinivasan et al. (2021); Boss et al. (2021), texture and shape Liu et al. (2021); Jang & de Agapito (2021); Deng et al. (2020); Noguchi et al. (2021). These works are limited to an entire volumetric scene or object but not to objects within a scene. Further, they are limited to specific categories on constrained domains (e.g human parts).
Another line of work considers the disentanglement of objects in a full 3D scene. Niemeyer & Geiger (2021); Nguyen-Phuoc et al. (2020) consider the generation of scenes in a compositional manner. In contrast, we disentangle an existing scene into the foreground and background volumes, while they generate such volumes from scratch. A subsequent line of works considers the disentanglement of objects in an existing scene. Several representations can be used to learn 3D scenes such as point clouds Shu et al. (2019); Yang et al. (2019); Hui et al. (2020); Achlioptas et al. (2017), meshes Hanocka et al. (2019); Groueix et al. (2018); Wang et al. (2018); Pan et al. (2019), or voxels Riegler et al. (2017); Xie et al. (2019); Wu et al. (2016); Brock et al. (2016). However, work using these representations for disentanglement Broadhurst et al. (2001); Kutulakos & Seitz (2000) are typically restricted in topology or resolution or make strong assumptions about scenes.
Recently, a number of methods proposed to use neural fields (NeRFs) to represent individual objects in the scene. Guo et al. (2020) use an object library and learn a per object scattering field which can then be composed together to represent a scene where the object’s movement, lighting, and reflection can be controlled. Our method instead decomposes an existing scene into the foreground and background objects, capturing their relations, and subsequently allowing for object-specific edits. Ost et al. (2021) use a scene graph representation to decompose dynamic objects, but rely on a dynamic scene as input, and are restricted to only one class of objects with similar shapes. Fu et al. (2022); Kundu et al. (2022) consider specific types of semantic categories, for instance by considering a specialized domain s.a traffic scenes. Unlike these works, our work is not limited to the type of editable objects in the scene and enables a wider variety of manipulations including 3D object camouflage and 3D semantic manipulation of individual objects in a scene. Recently, and concurrently to our work, Kobayashi et al. (2022) proposed a disentanglement framework for neural fields using text or image patches. While it enables the disentanglement of coarse concepts based on text or image patch, it does not allow for the fine-grained control which a mask can provide in selecting the object to be disentangled.
Perhaps most similar to our work is ObjectNeRF Yang et al. (2021). ObjectNeRF uses an object branch to render rays with masked regions for foreground objects. At test time, it uses 3D bounding boxes of individual objects to edit their movement and placement. Similarly to ObjectNeRF, our method inherits NeRF's ability to produce novel views for both foreground and background objects. However, our method differs from ObjectNeRF in multiple ways: (i) Our method requires input 2D segmentation masks for the training views and does not require 3D bounding boxes for editing foreground objects; similarly, no 3D structure in the form of a voxel grid is required during training. (ii) Unlike ObjectNeRF, our method correctly captures objects with noisy and inaccurate masks, as well as reflective objects and those occluded by the background. (iii) Our method relies on ground truth RGB images of existing views for its loss objectives, and does not require an occlusion loss, which would require an accurate estimate of the scene's depth for existing and novel views. (iv) Lastly, our method goes beyond the editing of objects' movement and placement and enables zero-shot manipulations (requiring no 3D or 2D training data), such as 3D object camouflage and 3D text-based semantic manipulation of individual objects.
3D Manipulation Our framework enables the manipulation of localized regions in a scene. While 2D counterparts, such as 2D inpainting approaches, exist Guillemot & Le Meur (2013); Yu et al. (2019); Efros & Leung (1999); Efros & Freeman (2001), they cannot generate 3D-consistent manipulations. One set of approaches considers editing the entire scene. Canfes et al. (2022) consider texture and shape manipulation of 3D meshes. CLIP-Forge Sanghi et al. (2021a) generates objects matching a text prompt using CLIP embeddings. Text2Mesh Michel et al. (2021) manipulates the texture or style of an object. DreamFields Jain et al. (2021a) learns a neural radiance field representing 3D objects from scratch. Unlike these works, our work is concerned with manipulating a local region in an existing scene. Jang & de Agapito (2021) and Liu et al. (2021) modify the shape and color codes of objects using coarse 2D user scribbles, but require a curated dataset of objects under different colors and views, and are limited to synthetic objects. In contrast, our method enables the manipulation of objects in complex scenes, semantically, according to a target text prompt.
3 METHOD
Given a 3D scene, we wish to disentangle semantic objects from the rest of the scene. First, we describe the 3D volumetric representation used to disentangle objects and control objects separately (Sec. 3.1). The disentanglement of foreground and background volumes opens a wide range of downstream applications. We provide a framework that explores some of these applications by manipulating objects in a semantic manner (Sec. 3.2). An illustration of our framework is provided in Fig. 2. Additional training and implementation details are provided in Appendix A.
3.1 DISENTANGLED OBJECT REPRESENTATION
The ability to disentangle the foreground object volumetrically from the background requires a volumetric representation that correctly handles multiple challenges: (i) foreground occluding objects, which may be covered by a foreground mask, should not be included in the foreground volume; (ii) regions occluded by the foreground object should be visible in the background volume; (iii) illumination and reflectance effects, which affect the foreground object in the full scene volume, should affect the now unoccluded regions of the background in a natural way. To this end, we build upon the representation of neural radiance fields Mildenhall et al. (2020).
Neural Radiance Fields. A neural radiance field Mildenhall et al. (2020) is a continuous function $f$ whose input is a 3D position $p = (x, y, z) \in \mathbb{R}^3$ along with a viewing direction $d = (\theta, \phi) \in S^2$, indicating a position along a camera ray. The output of $f$ is an RGB color $c \in \mathbb{R}^3$ and a volume density $\sigma \in \mathbb{R}^+$. We first apply a frequency-based encoding $\gamma$ to correctly capture high-frequency details, using $\gamma(p) = [\cos(2\pi Bp), \sin(2\pi Bp)]^T$, where $B \in \mathbb{R}^{n \times 3}$ is a randomly drawn Gaussian matrix whose entries are drawn from $\mathcal{N}(0, \sigma_B^2)$, with $\sigma_B$ a hyperparameter. $f$ is then parameterized as an MLP $f_\theta$ whose input is $(\gamma(p), \gamma(d))$ and whose output is $c$ and $\sigma$.
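As a concrete illustration, the following is a minimal NumPy sketch of the Gaussian Fourier-feature encoding described above; the feature count `n_features` and scale `sigma_b` are illustrative placeholders, not values from the paper:

```python
import numpy as np

def make_fourier_encoder(n_features=256, sigma_b=10.0, in_dim=3, seed=0):
    """Build gamma(p) = [cos(2*pi*B p), sin(2*pi*B p)] with B ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, sigma_b, size=(n_features, in_dim))

    def gamma(p):
        # p: (..., in_dim) array of 3D positions (or directions)
        proj = 2.0 * np.pi * p @ B.T  # (..., n_features)
        return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

    return gamma
```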
Object Representation. Given camera extrinsics $\xi$, we assume a set $\{(c_r^i, \sigma_r^i)\}_{i=1}^N$ of color and volume density values predicted by $f_\theta$ for $N$ randomly chosen points along camera ray $r$. A rendering operator then maps these values to an RGB color $c_r$ as follows:
$$c_r = \sum_{i=1}^{N} w_r^i \cdot c_r^i, \qquad w_r^i = T_r^i \cdot \alpha_r^i, \qquad T_r^i = \prod_{j=1}^{i-1}\left(1 - \alpha_r^j\right), \qquad \alpha_r^i = 1 - \exp\left(-\sigma_r^i \cdot \delta_r^i\right) \tag{1}$$
where σir and T i r are the alpha and transmittance values for point i along ray r and δ r i = t i+1 − ti is the distance between adjacent samples. For training, we assume a set of posed views {xi}Mi=1 together with their associated foreground object masks {mi}Mi=1. We set {x̂i}Mi=1 to be the corresponding colors to {xi}Mi=1 as predicted by Eq. (1). Fig. 3 gives an overview of the training. To train the background (resp. full) volume, we minimize the masked (resp. unmasked) reconstruction loss
between real and estimated views:
$$\mathcal{L}_{bg} = \sum_{i=1}^{M} \left\|(1 - m_i) \odot (x_i - \hat{x}_i)\right\|_2^2, \qquad \mathcal{L}_{full} = \sum_{i=1}^{M} \left\|x_i - \hat{x}_i\right\|_2^2 \tag{2}$$
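To make Eqs. (1) and (2) concrete, the following is a minimal PyTorch sketch of the compositing operator and the two reconstruction losses; the tensor shapes and names are our own assumptions, not from an official implementation:

```python
import torch

def composite(sigma, color, delta):
    """Eq. (1): alpha-composite per-point densities and colors along each ray.
    sigma: (R, N) densities, color: (R, N, 3) colors, delta: (R, N) spacings."""
    alpha = 1.0 - torch.exp(-sigma * delta)                      # alpha_r^i
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    w = alpha * trans                                            # w_r^i
    c_r = (w.unsqueeze(-1) * color).sum(dim=-2)                  # per-ray color
    return c_r, w

def reconstruction_losses(x, x_hat, mask):
    """Eq. (2): masked background loss and unmasked full-scene loss."""
    l_bg = (((1.0 - mask) * (x - x_hat)) ** 2).sum()
    l_full = ((x - x_hat) ** 2).sum()
    return l_bg, l_full
```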
Let $w_{bg}^{ir}$ and $c_{bg}^{ir}$ be the values of $w_r^i$ and $c_r^i$ in Eq. (1) predicted for the background volume, and similarly let $w_{full}^{ir}$ and $c_{full}^{ir}$ be the values of $w_r^i$ and $c_r^i$ in Eq. (1) predicted for the full volume. A natural representation of the foreground object can then be found using the principle of volume mixing Drebin et al. (1988):

$$c_r^{fg} = \sum_{i=1}^{N} w_{fg}^{ir} \cdot c_{fg}^{ir}, \qquad w_{fg}^{ir} = w_{full}^{ir} - w_{bg}^{ir}, \qquad c_{fg}^{ir} = c_{full}^{ir} - c_{bg}^{ir} \tag{3}$$
$c_r^{fg}$ is the foreground volume color at the pixel corresponding to ray $r$. Eq. (3) renders the color of the foreground object for all pixels across different views.
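A minimal sketch of this volume-mixing step (Eq. (3)), reusing the per-ray weights returned by the `composite` sketch above; the subtraction assumes the full- and background-volume samples are aligned, as described next:

```python
def foreground_mix(w_full, c_full, w_bg, c_bg):
    """Eq. (3): the foreground volume as full volume minus background volume,
    evaluated at identical sample points along identical rays."""
    w_fg = w_full - w_bg
    c_fg = c_full - c_bg
    c_r_fg = (w_fg.unsqueeze(-1) * c_fg).sum(dim=-2)  # per-ray foreground color
    return c_r_fg, w_fg, c_fg
```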
Object Controls. We note that camera parameters, as well as chosen poses, rays, and sampled points along the rays, are chosen to be identical for both the full volume and the background volume, and hence also identical to the foreground volume. Given this canonical setting, the corresponding points along the rays for both the foreground and background can be easily found.
Due to the above-mentioned correspondence, one can independently modify $w_{fg}^{ir}$ and $c_{fg}^{ir}$ to obtain $w_{fg}^{\prime ir}$ and $c_{fg}^{\prime ir}$ for the foreground volume, as well as $w_{bg}^{ir}$ and $c_{bg}^{ir}$ to obtain $w_{bg}^{\prime ir}$ and $c_{bg}^{\prime ir}$ for the background volume. In order to recombine the modified background with the modified foreground, we note that every 3D point along the ray should be colored either according to the background volume or according to the foreground volume, but not both, as they are disentangled. We can then recombine the modified foreground and background:
$$c_r^c = \sum_{i=1}^{N} w_{bg}^{\prime ir} \cdot c_{bg}^{\prime ir} + w_{fg}^{\prime ir} \cdot c_{fg}^{\prime ir} \tag{4}$$
$c_r^c$ is the recombined color of the pixel corresponding to ray $r$. In our experiments, we only modify the foreground, so $w_{bg}^{\prime ir} = w_{bg}^{ir}$ and $c_{bg}^{\prime ir} = c_{bg}^{ir}$.
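A sketch of this recombination (Eq. (4)), under the same shape assumptions as above:

```python
def recombine(w_bg, c_bg, w_fg, c_fg):
    """Eq. (4): recombine (possibly modified) background and foreground
    contributions point-wise along each ray, then sum over samples."""
    contrib = w_bg.unsqueeze(-1) * c_bg + w_fg.unsqueeze(-1) * c_fg
    return contrib.sum(dim=-2)  # c_r^c, per-ray recombined color
```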
3.2 OBJECT MANIPULATION
Given the ability to control the foreground and background volumes separately, we now propose a set of downstream manipulation tasks that emerge from our disentangled representation. As noted in Sec. 3.1, we can now control the weights, colors, and translation parameters separately for the foreground and background volumes, and so we introduce a set of manipulation tasks that use these controls. We note that the task of Object Removal is equivalent to displaying the background.
Object Transformation. Due to the alignment of camera parameters, as well as chosen poses, rays, and sampled points along the rays, one can apply a transformation on the background and foreground volumes separately, before recombining the volumes together. For either the foreground or the background, and for a given transformation T , we simply evaluate the color and weight of point p using fθ at position T−1(p) and then recombine the volumes together using Eq. (4).
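A minimal sketch of querying the field at transformed points, where `T_inv` is assumed to be a 4x4 homogeneous matrix for the inverse transform $T^{-1}$:

```python
def transformed_query(f_theta, points, dirs, T_inv):
    """Evaluate the field at T^{-1}(p): a transform T is applied to the
    rendered object by warping the query points instead of the volume."""
    warped = points @ T_inv[:3, :3].T + T_inv[:3, 3]
    return f_theta(warped, dirs)  # per-point (color, density) at warped points
```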
Object Camouflage. Here we wish to change the texture of the foreground 3D object such that it is difficult to detect against its background Owens et al. (2014); Guo et al. (2022). Such an ability can be useful in the context of diminished reality Mori et al. (2017). To do so, we fix the depth of the foreground object while manipulating its texture. As the depth of the foreground is derived from $w_{fg}^{ir}$, we fix $w_{fg}^{\prime ir} = w_{fg}^{ir}$ and only optimize $c_{fg}^{\prime ir}$. We follow Eq. (4) in compositing the foreground and background volumes. Let the resulting output for each view $i$ be $\hat{x}_i^c$, and let $\hat{x}_i^{bg}$ be the corresponding output for the background volume. We optimize a neural radiance field for the foreground colors $c_{fg}^{\prime ir}$ to minimize $\mathcal{L}_{camouflage} = \sum_{i=1}^{M} \|\hat{x}_i^c - \hat{x}_i^{bg}\|_2^2$. As the depth is fixed, only the foreground object colors are changed, to match the background volume as closely as possible.
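A sketch of one camouflage optimization step, reusing the hypothetical `recombine` helper above; `render_fg_colors` stands in for the trainable foreground color field:

```python
def camouflage_step(render_fg_colors, w_bg, c_bg, w_fg, x_hat_bg, optimizer):
    """One step of L_camouflage: the foreground weights w'_fg are frozen
    (detached), so only the foreground color field receives gradients."""
    c_fg = render_fg_colors()  # c'_fg predicted by the trainable color field
    x_hat_c = recombine(w_bg.detach(), c_bg.detach(), w_fg.detach(), c_fg)
    loss = ((x_hat_c - x_hat_bg.detach()) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```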
Non-negative 3D Inpainting. Next, we consider the setting of non-negative image generation Luo et al. (2021). We are interested in performing non-negative changes to views of the full scene so as to most closely resemble the background. This constraint is imposed in optical-see-through devices that can only add light onto an image. In this case, we learn a residual volume used to render views $\hat{x}_i^{residual}$ as in Eq. (1), minimizing $\mathcal{L}_{non\text{-}negative} = \sum_{i=1}^{M} \|\hat{x}_i^{full} + \hat{x}_i^{residual} - \hat{x}_i^{bg}\|_2^2$, where $\hat{x}_i^{full}$ are rendered views of the full scene as in Eq. (2). That is, we learn a residual volume whose views $\hat{x}_i^{residual}$, when added to the full-volume views, most closely resemble the background.
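A sketch of the non-negative objective; how the non-negativity of the residual is enforced is our assumption here (in a NeRF the rendered residual is non-negative by construction when colors pass through a sigmoid, so the clamp below only makes the constraint explicit):

```python
def non_negative_loss(x_hat_full, x_hat_residual, x_hat_bg):
    """L_non-negative: the residual views can only *add* light to the full
    scene so that the sum approaches the background views."""
    residual = x_hat_residual.clamp(min=0.0)  # additive-light constraint
    return ((x_hat_full + residual - x_hat_bg) ** 2).sum()
```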
Semantic Manipulation. Next, we consider a mechanism for the semantic manipulation of the foreground. We consider the recently proposed model of CLIP Radford et al. (2021), which can be used to embed an image $I$ and a text prompt $t$ (or another image $I_2$), and to subsequently compare the cosine similarity of the embeddings. Let this operation be $\mathrm{sim}(I, t)$ (resp. $\mathrm{sim}(I, I_2)$), where a value of 1 indicates a perceptually similar text (resp. image) and image. Let $\hat{x}_i^c$ be the result of applying Eq. (4) while fixing the background colors and weights as well as the foreground weights; that is, we only optimize the foreground colors $c_{fg}^{\prime ir}$. For a user-specified target text $t$, we consider the objective:
$$\mathcal{L}_{semantic} = \sum_{i=1}^{M} 1 - \mathrm{sim}\left(\hat{x}_i^c \odot m_i + \hat{x}_i^{bg} \odot (1 - m_i),\; t\right) \tag{5}$$
$$+\; 1 - \mathrm{sim}\left(\hat{x}_i^c \odot m_i + \hat{x}_i^{bg} \odot (1 - m_i),\; \hat{x}_i^{bg} \odot (1 - m_i)\right) \tag{6}$$
$$+\; \left\|\hat{x}_i^c \odot (1 - m_i) - \hat{x}_i^{bg} \odot (1 - m_i)\right\|_2^2 \tag{7}$$
We note that while only the colors of the foreground volume can be manipulated, we enforce that such changes occur only within the localized masked region of the foreground, and so we take the background from the fixed background volume. To do so, instead of applying CLIP similarity directly to $\hat{x}_i^c$, we apply it to $\hat{x}_i^c \odot m_i + \hat{x}_i^{bg} \odot (1 - m_i)$. Therefore, CLIP's similarity can only be improved by making local changes within the masked region of the foreground object, while still "seeing" the background as well as the foreground for context. We enforce that the generated volume views are similar to both the target text (Eq. (5)) and the background (Eq. (6)). To further enforce that no changes are made to the background, we constrain the background of the combined volume views to match that of the background volume using Eq. (7).
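A sketch of the combined objective of Eqs. (5)-(7); `clip_model.embed_image` and the precomputed `text_emb` are assumed wrappers around a CLIP implementation, not a specific API:

```python
import torch

def semantic_loss(x_hat_c, x_hat_bg, mask, text_emb, clip_model):
    """Eqs. (5)-(7): CLIP-guided edit confined to the masked foreground."""
    comp = x_hat_c * mask + x_hat_bg * (1.0 - mask)  # edits only inside mask
    img_emb = clip_model.embed_image(comp)
    bg_emb = clip_model.embed_image(x_hat_bg * (1.0 - mask))
    l_text = 1.0 - torch.cosine_similarity(img_emb, text_emb, dim=-1).mean()
    l_bg = 1.0 - torch.cosine_similarity(img_emb, bg_emb, dim=-1).mean()
    l_keep = (((x_hat_c - x_hat_bg) * (1.0 - mask)) ** 2).sum()
    return l_text + l_bg + l_keep
```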
4 EXPERIMENTS
We divide the experimental section into two parts. First, we consider the ability to successfully disentangle the foreground and background volumes from the rest of the scene. Second, we demonstrate some of the many manipulation tasks this disentanglement enables, as described in Sec. 3.2. Corresponding 3D scenes from multiple existing and novel views are provided as supplementary material.
4.1 OBJECT DISENTANGLEMENT
Fig. 4 shows views from different scenes of the LLFF dataset Mildenhall et al. (2019), where we separate the full scene, background, and foreground in a volumetrically and semantically consistent manner. We compare our method to ObjectNeRF Yang et al. (2021). We note that ObjectNeRF requires 3D bounding boxes to extract the background volume, which we do not use. Hence, we consider only the foreground extracted by ObjectNeRF. As can be seen, ObjectNeRF's extracted foreground object captures much of the background as well. This is most visible for the orchids and tree trunk examples (second and third rows). Further, for the TV extraction (fourth row), our object representation isolates the reflections on the TV from the background (hence the TV appears black), while ObjectNeRF considers those reflections as part of the object representation. Our reflection-independent object representation allows us to resize the TV in a manner that correctly adheres to those reflections as resulting from background light sources. This can be seen in Fig. 6. As a further comparison, we consider a neural field trained to reconstruct only the masked region. Due to some noisy masks, shown in the appendix, this yields a noisy result which captures much of the background.
Fig. 5 depicts the consistency of the removal of a leaf, a T-rex, and a whiteboard across two different views. The background neural radiance field makes plausible predictions of the background scene via multi-view geometry and based on correlated effects from the scene. E.g., the background behind the leaf or the legs of the T-rex might be occluded by the 2D mask in one view, but visible from another. The background behind the whiteboard, however, is occluded from every angle. Nevertheless, the background neural radiance field makes a plausible prediction of the background based on correlated effects from the surrounding scene. Further, our model can handle the disentanglement of non-planar objects, such as the T-rex, well.
In the 2D domain, as far as we can ascertain, the closest 2D task to object disentanglement is that of object inpainting. We consider two prominent baselines, DeepFill-v2 Yu et al. (2019) and EdgeConnect Nazeri et al. (2019), for this task and compare our method on the leaves and whiteboard removal scenes of Fig. 5. We train the baselines on the same training images and their associated masks. In order to compare on the same novel views, we train a NeRF Mildenhall et al. (2020) on the resulting outputs, producing a scene with the same novel views as ours. Unlike our method, the results exhibit 3D inconsistencies, artifacts, and flickering between views. The visual comparison is provided in the supplementary material.
To assess our method numerically, we conduct a user study and ask users to rate, on a scale of 1-5: (Q1) "How well was the object removed/extracted?" and (Q2) "How realistic is the resulting object/scene?" We consider 25 users, and mean opinion scores are shown in Tab. 1. For object extraction, we consider ObjectNeRF Yang et al. (2021) and the scenes in Fig. 4. For object removal, we consider the 2D baselines DeepFill-v2 Yu et al. (2019) and EdgeConnect Nazeri et al. (2019), as detailed above, and the leaves and whiteboard scenes as in Fig. 5. As no 3D bounding box is provided, we did not consider ObjectNeRF Yang et al. (2021) for object removal.
4.2 OBJECT MANIPULATION
Foreground Transformation. We consider the ability to scale the foreground object and place the rescaled object back into the scene by changing the focal length used to generate the rays of the foreground object, and then volumetrically adding it back into our background volume. Fig. 6 shows an example where the disentangled TV is twice as large. We note that other transformations, such as translation and rotation, are possible in a similar manner. Fig. 6 highlights several properties of our volumetric disentanglement. First, the network is able to "hallucinate" how a plausible background looks in regions occluded across all views (e.g., behind the TV). It does this based on correlated effects from the rest of the scene. A second property is that it can disentangle correlated effects such as the reflections on the TV screen, which is evident from the almost completely black TV in the foreground scene. Lastly, these correlated effects result in consistent and photo-realistic reflections when we place the rescaled TV back into the scene.
Object Camouflage. Another manipulation is camouflaging an object Owens et al. (2014); Guo et al. (2022), i.e., changing only the texture of the object and not its shape. Fig. 7 illustrates examples of camouflaging with fixed depth but free texture changes. While the depth of the camouflaged object and that of the foreground object match, the appearance is that of the background.
Non-Negative Inpainting. In optical see-through AR, one might also wish to camouflage objects Luo et al. (2021) or inpaint them. However, in see-through AR one can only add light. Fig. 8 shows how adding light can create the appearance of camouflage in a 3D-consistent manner.
3D Object Manipulation. We now consider 3D object manipulation. Fig. 9 shows two views of a fern. We have disentangled both the window mullion in the upper left corner and the tree trunk from the rest of the scene. Even though the window mullion is occluded in the first view, and thus our 2D mask is masking the occluding leaf in front of the window mullion, this occluding object is not part of the disentangled window mullion object. The 3D manipulations are shown in (c)-(e) in Fig. 9. For the strawberry manipulation in (e), note how part of the tree trunk was camouflaged to more closely resemble the shape of a strawberry. We compare to 2D text-based inpainting methods of GLIDE Nichol et al. (2021) and Blended Diffusion Avrahami et al. (2021), where we follow the same procedure as in Sec. 4.1. We consider a similar user study as detailed in Sec. 4.1, where Q1 is modified to: “How well was the object semantically manipulated according to the target text prompt?” and consider the fern scene of Fig. 9, for the text prompts of “strawberry” and “old tree”.
4.3 DISCUSSION AND LIMITATIONS
Our work has some limitations. When light from the background affects the foreground object, we correctly disentangle the illumination on the object. However, when the object is itself a light source, we cannot completely disentangle it, as seen in Appendix Fig. 10 (a) and in the supplementary material. Another limitation concerns the semantic manipulation of foreground objects. We found that manipulating objects that are too large results in an under-constrained optimization, because the signal provided by CLIP is not sufficient. We also note that, while our work can handle noisy masks, we require masks for all training views. We leave the task of reducing the number of masks for future work. Further, artifacts may arise when the training views do not provide sufficient information to generate the foreground using correlated background effects of multi-view geometry. Our work is orthogonal to recent speed and generalization extensions of NeRF that could be combined with our method. The number of 2D masks required by our method is upper bounded by the number of training views, so methods such as DietNeRF Jain et al. (2021b) and SinNeRF Xu et al. (2022) could be combined with our method to reduce the number of required 2D masks, even down to a single mask. Alternatively, one can use zero-shot segmentation approaches such as Shin et al. (2022) to obtain masks in a zero-shot manner. In Appendix B, we consider, for the task of foreground object translation (Fig. 6), alternatives to the recombining method of Eq. (4). Lastly, we note that our method can handle noisy annotations of the foreground. In Appendix Fig. 10(c), we demonstrate the masks used for the leaves scene, which were extracted using an off-the-shelf segmentation algorithm.
5 CONCLUSION
In this work, we presented a framework for the volumetric disentanglement of foreground objects from a background scene. The disentangled foreground object is obtained by volumetrically subtracting a learned volume representation of the background from one of the entire scene. The foreground-background disentanglement adheres to object occlusions and background effects such as illumination and reflections. We established that our disentanglement facilitates separate control of color, depth, and transformations for both the foreground and background objects. This enables a wide range of applications going beyond object movement and placement, of which we have demonstrated object camouflage, non-negative generation, and 3D object manipulation.
B ADDITIONAL VISUALIZATIONS
As noted in the main text, Fig. 10 (a) shows the failure to remove a light source. In Fig. 10 (b1 to b4), we show, for the task of foreground object translation (Fig. 6), alternatives to the recombining method of Eq. (4), with (b2) $c_{full}^{\prime ir}$ instead of $c_{fg}^{\prime ir}$, (b3) $w_{full}^{\prime ir}$ instead of $w_{fg}^{\prime ir}$, and (b4) $c_r^c = \sum_{i=1}^{N}\left(w_{bg}^{\prime ir} + w_{fg}^{\prime ir}\right) \cdot \left(c_{bg}^{\prime ir} + c_{fg}^{\prime ir}\right)$. Fig. 10 (c) shows examples of the noisy masks used for the leaf scene disentanglement, which our method handles correctly.
C TRAINING MASKS
We provide a sample of the training masks used for training views in Fig. 11. | 1. What is the main contribution of the paper in terms of 3D object disentanglement and manipulation?
2. What are the strengths and weaknesses of the proposed method compared to prior works like ObjectNeRF?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions regarding the paper's experiments, figures, or minor issues? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper aims to disentangle and then manipulate objects in 3D, given a set of images capturing the same object and its background scene. It operates in a NeRF setting, but additionally requires 2D object masks as input, and outputs an object NeRF. The method is very similar to ObjectNeRF (Yang et al. ICCV21), and the major innovation is modeling the object as the "full 3D scene" minus the "background 3D scene", which is claimed to be better than modeling the object directly. They show a few manipulation applications such as foreground 3D transformation, camouflage, 3D composition, etc. Compared to 2D manipulation baselines, the proposed method produces 3D-consistent results.
Strengths And Weaknesses
Strength
The problem of 3D object disentanglement and manipulation is interesting and well motivated.
The applications such as "non-negative inpainting" and "object camouflage" are interesting and creative.
The results are well presented as formatted videos in the supplement webpage.
Weaknesses
Clarity. When showing results, please show the 2D segmentation input. Otherwise, it is hard to appreciate the quality of the solution. If the segmentation is nearly perfect, the disentanglement problem is much less difficult. Furthermore, are the baselines (ObjectNeRF and 2D baselines) given the same 2D masks as the proposed method?
Writing. Overall, it is not clear what challenge the paper wants to address. The challenge of disentanglement and manipulation is sparsely discussed in the intro at a high level, but I don't see why existing methods, such as ObjectNeRF, cannot solve it. Similarly, in terms of new applications, I don't see how these are tied to the proposed "subtraction" solution.
Novelty/Quality. The paper's major technical innovation is modeling the object as the "full 3D scene" minus the "background 3D scene", which is claimed to be better than modeling the object directly (as in the ObjectNeRF paper). However, I don't see theoretical or practical evidence that backs this up. I would think that modeling A="object" and B="background" separately and composing them as C="full scene" is as challenging as modeling C and B separately and then obtaining A=C-B.
Experiments. In the comparison with ObjectNeRF in Fig. 4, it is hard to say which one is better. The proposed solution always segments only a subset of the pixels belonging to the object, but ObjectNeRF is able to segment the full object.
Minor
Fig 5: It is not clear what "visually enhanced by a 2D mask" means. Is this an input to the optimization, or just added for visualization purposes?
Fig 6: The foreground object is almost a black image. I would suggest changing to another color scheme, such as a gray background.
Fig 7: The disparity rendering does not look accurate. What causes the vertical line artifacts?
Intro: "we consider the ability to affect semantic manipulations to the foreground" is not clear. "affect" seems out of place.
The paper [A] solves a similar problem without using segmentation masks as input. It is concurrent work, but it would be good to discuss the difference.
[A] Kobayashi, Sosuke, Eiichi Matsumoto, and Vincent Sitzmann. "Decomposing NeRF for Editing via Feature Field Distillation." NeurIPS 2022.
Clarity, Quality, Novelty And Reproducibility
Reproducibility: It is not clear how the segmentation is obtained. Beyond that, I believe the results are reproducible. For the rest, please see the strengths and weaknesses.
ICLR | Title
Robust Imitation Learning from Corrupted Demonstrations
Abstract
We consider offline Imitation Learning from corrupted demonstrations where a constant fraction of the data can be noise or even arbitrary outliers. Classical approaches such as Behavior Cloning assume that demonstrations are collected by a presumably optimal expert, and hence may fail drastically when learning from corrupted demonstrations. We propose a novel robust algorithm by minimizing a Median-of-Means (MOM) objective, which guarantees accurate estimation of the policy even in the presence of a constant fraction of outliers. Our theoretical analysis shows that our robust method in the corrupted setting enjoys nearly the same error scaling and sample complexity guarantees as classical Behavior Cloning in the expert demonstration setting. Our experiments on continuous-control benchmarks validate that existing algorithms are fragile under corrupted demonstrations, while our method exhibits the predicted robustness and effectiveness.
1 INTRODUCTION
Recent years have witnessed the success of using autonomous agents to learn and adapt to complex tasks and environments in a range of applications, such as playing games (e.g. Mnih et al., 2015; Silver et al., 2018; Vinyals et al., 2019), autonomous driving (e.g. Kendall et al., 2019; Bellemare et al., 2020), robotics (Haarnoja et al., 2017), medical treatment (e.g. Yu et al., 2019), and recommendation systems and advertising (e.g. Li et al., 2011; Thomas et al., 2017).
Previous successes in sequential decision making often require two key components: (1) a carefully designed reward function that can provide the supervision signal during learning, and (2) an unlimited number of online interactions with the real-world environment (or a carefully designed simulator) to explore new unseen regions. However, in many scenarios, neither component is available. For example, it is hard to define the reward signal for uncountably many extreme situations in autonomous driving, and it is dangerous and risky to directly deploy a learning policy on humans to gather information in autonomous medical treatment (Yu et al., 2019). Therefore, an offline sequential decision making algorithm without reward signals is in demand.
Offline Imitation Learning (IL) offers an elegant way to train intelligent agents for complex tasks without knowledge of reward functions or the use of a simulator. Since offline imitation learning does not interact with the environment, it is crucial to have high-quality expert demonstrations in order to guide intelligent agents to correct behaviors. The well-known Behavior Cloning (BC) algorithm (Pomerleau, 1988) assumes that the demonstrations given for training are all presumably optimal, and aims to learn the mapping from state to action from the expert demonstration data set.
However, in real-world scenarios, since demonstrations are often collected from humans, we cannot guarantee that all the demonstrations we collect have high quality. A human expert can make mistakes by accident or due to the hardness of a complicated scenario (e.g., medical diagnosis). Furthermore, even if an expert demonstrates a successful behavior, the recorder or the recording system may contaminate the data by accident or on purpose (e.g. Eykholt et al., 2018; Neff & Nagy, 2016).
This leads to the central question of the paper:
Can the optimality assumption on expert demonstrations be weakened or even tolerate arbitrary outliers under offline imitation learning settings?
More concretely, we consider the corrupted demonstrations setting where the majority of the demonstration data is collected by an expert policy (presumably optimal), and the remaining data can even be arbitrary outliers (the formal definition is presented in Definition 2.1). This has great significance in many applications, such as automated medical diagnosis for healthcare (Yu et al. (2019)) and autonomous driving (Ma et al., 2018), where the historical data (demonstrations) is often complicated and noisy, which calls for robustness considerations.
However, classical offline imitation learning approaches such as Behavior Cloning (BC) fail drastically under this corrupted demonstrations setting. We illustrate this phenomenon in Figure 1. We run BC on a continuous control environment, and the performance of the policy learned by BC drops drastically as the fraction of corruptions in the offline demonstration data set increases. However, our proposed algorithm – Robust Behavior Cloning (Algorithm 1) – is resilient to corruptions in the offline demonstrations. The detailed experimental setup is included in Section 5. We now summarize our contributions as follows.
1.1 MAIN CONTRIBUTIONS
• (Algorithm) We consider robustness in offline imitation learning where we have corrupted demonstrations. Our definition of corrupted demonstrations significantly weakens the presumably optimal assumption on demonstration data, and can tolerate a constant ε-fraction of state-action pairs being arbitrarily corrupted. We refer to Definition 2.1 for a more precise statement. To deal with this issue, we propose a novel algorithm, Robust Behavior Cloning (Algorithm 1), for robust imitation learning. Our algorithm works in the offline setting, without any further interaction with the environment. The core ingredient of our robust algorithm is a novel Median-of-Means objective in policy estimation, compared to classical Behavior Cloning. Hence, it is simple to implement and computationally efficient.
• (Theoretical guarantees) We analyze our Robust Behavior Cloning algorithm when there exists a constant fraction of outliers in the demonstrations under the offline setting. We show that our RBC achieves nearly the same error scaling and sample complexity as vanilla BC with expert demonstrations. To this end, our algorithm guarantees robustness to corrupted demonstrations at no cost in statistical error. This is the content of Section 4.
• (Empirical support) We validate the predicted robustness and show the effectiveness of our algorithm on different high-dimensional continuous control benchmarks – vanilla BC is indeed fragile with corrupted demonstrations, while our Robust Behavior Cloning achieves nearly the same performance as vanilla BC with expert demonstrations. This is the content of Section 5.
2 PROBLEM SETUP
2.1 REINFORCEMENT LEARNING AND IMITATION LEARNING
Markov Decision Process and Reinforcement Learning. We start the problem setup by introducing the Markov decision process (MDP). An MDP $M = \langle S, A, r, P, \mu_0, \gamma\rangle$ consists of a state space $S$, an action space $A$, an unknown reward function $r: S \times A \to [0, R_{\max}]$, an unknown transition kernel $P: S \times A \to \Delta(S)$, an initial state distribution $\mu_0 \in \Delta(S)$, and a discount factor $\gamma \in (0, 1)$. We use $\Delta$ to denote probability distributions on the simplex. An agent acts in an MDP following a policy $\pi(\cdot|s)$, which prescribes a distribution over the action space $A$ given each state $s \in S$. Running the policy starting from the initial distribution $s_1 \sim \mu_0$ yields a stochastic trajectory $\mathcal{T} := \{s_t, a_t, r_t\}_{1 \le t \le \infty}$, where $s_t, a_t, r_t$ represent the state, action, and reward at time $t$ respectively, with $a_t \sim \pi(\cdot|s_t)$, and the next state follows the unknown transition kernel $s_{t+1} \sim P(\cdot|s_t, a_t)$. We denote by $\rho_{\pi,t} \in \Delta(S \times A)$ the marginal joint distribution of state and action at time step $t$, and we define $\rho_\pi = (1-\gamma)\sum_{t=1}^{\infty} \gamma^{t-1}\rho_{\pi,t}$ as the visitation distribution of policy $\pi$. For simplicity, we reuse the notation $\rho_\pi(s) = \int_{a \in A} \rho_\pi(s, a)\,da$ to denote the marginal distribution over states.
The goal of reinforcement learning is to find the best policy $\pi$ to maximize the expected cumulative return $J_\pi = \mathbb{E}_{\mathcal{T} \sim \pi}\left[\sum_{t=1}^{\infty} \gamma^{t-1} r_t\right]$. Common RL algorithms (see, e.g., Szepesvári (2010)) require online interaction and exploration with the environment. However, this is prohibited in the offline setting.
Imitation Learning. Imitation learning (IL) aims to obtain a policy that mimics an expert's behavior from a demonstration data set $D = \{(s_i, a_i)\}_{i=1}^N$, where $N$ is the sample size of $D$. Note that we do not need any reward signal. Traditional imitation learning assumes perfect (or near-optimal) expert demonstrations – for simplicity we assume that each state-action pair $(s_i, a_i)$ is drawn from the joint stationary distribution of an expert policy $\pi_E$:
$$(s_i, a_i) \sim \rho_{\pi_E} \tag{1}$$
Learning from demonstrations with or without online interactions has a long history (e.g., Pomerleau (1988); Ho & Ermon (2016)). The goal of offline IL is to learn a policy $\hat{\pi}^{IL} = \mathcal{A}(D)$ through an IL algorithm $\mathcal{A}$, given the demonstration data set $D$, without further interaction with the unknown true transition dynamics $P$.
Behavior Cloning. Behavior Cloning (BC) is the well-known algorithm (Pomerleau, 1988) for IL which only uses offline demonstration data without any interaction with the environment. More specifically, BC solves the following Maximum Likelihood Estimation (MLE) problem, which minimizes the average Negative Log-Likelihood (NLL) over all samples in the offline demonstrations $D$:

$$\hat{\pi}^{BC} = \arg\min_{\pi \in \Pi} \frac{1}{N} \sum_{(s,a) \in D} -\log(\pi(a|s)) \tag{2}$$
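For concreteness, a minimal PyTorch sketch of the BC objective in Eq. (2); here `policy(states)` is assumed to return a `torch.distributions` object over actions:

```python
import torch

def bc_loss(policy, states, actions):
    """Eq. (2): average negative log-likelihood of demonstrated actions."""
    dist = policy(states)
    return -dist.log_prob(actions).mean()

# One vanilla BC gradient step (optimizer is any torch.optim optimizer):
# loss = bc_loss(policy, s_batch, a_batch)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```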
Recent works (Agarwal et al., 2019; Rajaraman et al., 2020; Xu et al., 2021) have shown that BC is optimal under the offline setting, and can only be improved with knowledge of the transition dynamics $P$ in the worst case. Also, another line of research considers improving BC with further online interaction with the environment (Brantley et al., 2019) or by actively querying an expert (Ross et al., 2011; Ross & Bagnell, 2014).
2.2 LEARNING FROM CORRUPTED DEMONSTRATIONS
However, it is sometimes unrealistic to assume that the demonstration data set is collected through a presumably optimal expert policy. In this paper, we propose Definition 2.1 for the corrupted demonstrations, which tolerates gross corruption or model mismatch in offline data set.
Definition 2.1 (Corrupted Demonstrations). Let the state-action pairs $(s_i, a_i)_{i=1}^N$ be drawn from the joint stationary distribution of a presumably optimal expert policy $\pi_E$. The corrupted demonstration data $D$ are generated by the following process: an adversary can choose an arbitrary $\epsilon$-fraction ($\epsilon < 0.5$) of the samples in $[N]$ and modify them with arbitrary values. We note that $\epsilon$ is a constant independent of the dimensions of the problem. After the corruption, we use $D$ to denote the corrupted demonstration data set.
This corruption process can represent gross corruptions or model mismatch in the demonstration data set. To the best of our knowledge, Definition 2.1 is the first definition for corrupted demonstrations in imitation learning which tolerates arbitrary corruptions.
In supervised learning, the well-known Huber contamination model (Huber (1964)) considers $(x, y) \overset{iid}{\sim} (1-\epsilon)P + \epsilon Z$, where $x \in \mathbb{R}^d$ is the explanatory variable (feature) and $y \in \mathbb{R}$ is the response variable. Here, $P$ denotes the authentic statistical distribution, such as a Normal mean estimation or linear regression model, and $Z$ denotes the distribution of the outliers.
Dealing with corrupted $x$ and $y$ in high dimensions has a long history in the robust statistics community (e.g. Rousseeuw, 1984; Chen et al., 2013; 2017; Yin et al., 2018). However, it is only recently that robust statistical methods can handle a constant $\epsilon$-fraction (independent of the dimension $d$) of outliers in $x$ and $y$ (Klivans et al., 2018; Prasad et al., 2020; Diakonikolas et al., 2019; Liu et al., 2019; 2020; Shen & Sanghavi, 2019; Lugosi & Mendelson, 2019; Lecué & Lerasle, 2020; Jalal et al., 2020). We note that in Imitation Learning, the data collecting process for the demonstrations does not obey the i.i.d. assumption of traditional supervised learning, due to temporal dependency.
Notations. Throughout this paper, we use $\{c_i\}_{i=1,2,3}$ to denote universal positive constants. We utilize the big-O notation $f(n) = O(g(n))$ to denote that there exists a positive constant $c_1$ and a natural number $n_0$ such that, for all $n \ge n_0$, we have $f(n) \le c_1 g(n)$.
3 OUR ALGORITHMS
It is well known that the Median-of-Means (MOM) estimator achieves sub-Gaussian concentration bounds for one-dimensional mean estimation even when the underlying distribution only has a bounded second moment (heavy-tailed distributions); interested readers are referred to classical references such as Nemirovsky & Yudin (1983); Jerrum et al. (1986); Alon et al. (1999).
The vanilla MOM estimator for one-dimensional mean estimation works as follows: (1) randomly partition the $N$ samples into $M$ batches; (2) calculate the mean of each batch; (3) output the median of these batch means. Very recently, MOM estimators have been used for high-dimensional robust regression (Brownlees et al., 2015; Hsu & Sabato, 2016) by applying the MOM estimator to the loss function of the empirical risk minimization process.
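For intuition, a minimal NumPy sketch of this one-dimensional MOM estimator:

```python
import numpy as np

def mom_mean(x, n_batches):
    """Vanilla Median-of-Means estimator for a 1-D sample x."""
    x = np.random.permutation(x)               # (1) random partition
    batches = np.array_split(x, n_batches)
    batch_means = [b.mean() for b in batches]  # (2) per-batch means
    return np.median(batch_means)              # (3) median of the batch means
```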
3.1 ROBUST BEHAVIOR CLONING
Motivated by using MOM estimators on the loss function, we propose Definition 3.1, which uses a MOM objective to handle arbitrary outliers in the demonstration data set $(s, a) \in D$. Definition 3.1 (Robust Behavior Cloning). We randomly split the corrupted demonstrations $D$ into $M$ batches¹ $\{B_j\}_{j=1}^M$, with batch size $b \le \frac{1}{3\epsilon}$. Robust Behavior Cloning solves the following optimization:

$$\hat{\pi}^{RBC} = \arg\min_{\pi \in \Pi} \max_{\pi' \in \Pi} \underset{1 \le j \le M}{\mathrm{median}}\left(\ell_j(\pi) - \ell_j(\pi')\right), \tag{3}$$
¹Without loss of generality, we assume that $M$ exactly divides the sample size $N$, and $b = N/M$ is the batch size.
Algorithm 1 Robust Behavior Cloning.
1: Input: Corrupted demonstrations $D$
2: Output: Robust policy $\hat{\pi}^{RBC}$
3: Initialize $\pi$ and $\pi'$.
4: for $t = 0$ to $T - 1$ do
5: Randomly partition $D$ into $M$ batches with batch size $b \le \frac{1}{3\epsilon}$.
6: For each batch $j \in [M]$, calculate the loss $\ell_j(\pi) - \ell_j(\pi')$ by Eq. (4).
7: Pick the batch with the median loss among the $M$ batches, $\mathrm{median}_{1 \le j \le M}\left(\ell_j(\pi) - \ell_j(\pi')\right)$, and evaluate the gradients for $\pi$ and $\pi'$ using back-propagation on that batch: (i) perform gradient descent on $\pi$; (ii) perform gradient ascent on $\pi'$.
8: end for
9: Return: Robust policy $\hat{\pi}^{RBC} = \pi$.
where the loss function $\ell_j(\pi)$ is the average Negative Log-Likelihood in batch $B_j$:

$$\ell_j(\pi) = \frac{1}{b} \sum_{(s,a) \in B_j} -\log(\pi(a|s)). \tag{4}$$
The workhorse of Definition 3.1 is Eq. (3), which uses a novel variant of the Median-of-Means (MOM) tournament procedure (Le Cam (2012); Lugosi & Mendelson (2019); Lecué & Lerasle (2020); Jalal et al. (2020)). In Eq. (4), we calculate the average Negative Log-Likelihood (NLL) over a single batch, and $\hat{\pi}^{RBC}$ is the solution of a min-max formulation based on the batch losses $\ell_j(\pi)$. Though our algorithm minimizes a robust version of the NLL, we do not rely on the traditional i.i.d. assumption of supervised learning.
To gain some intuition for the formulation in Eq. (3), note that if we replace the median operator by the mean operator, then RBC is equivalent to BC, which just minimizes the empirical average of the Negative Log-Likelihood; this is due to the linearity of the mean operator. However, that is not robust to corrupted demonstrations. Hence, we use the median operator on the loss function.
The intuition behind solving this min-max formulation is that the inner variable $\pi'$ needs to get close to $\pi_E$ to maximize the difference of the loss functions, and the outer variable $\pi$ also needs to get close to $\pi_E$. Hence we can guarantee that $\hat{\pi}^{RBC}$ will be close to $\pi_E$. In Section 4, we show that under corrupted demonstrations, $\hat{\pi}^{RBC}$ in Eq. (3) has nearly the same error scaling and sample complexity as vanilla BC with expert demonstrations.
In Section 4, we provide rigorous statistical guarantees for Definition 3.1. However, the objective function in Eq. (3) is not convex in general, hence we use Algorithm 1 as a computational heuristic to solve it.
In each iteration of Algorithm 1, we randomly partition the demonstration data set $D$ into $M$ batches and calculate the losses $\ell_j(\pi) - \ell_j(\pi')$ by Eq. (4). We then pick the batch $B_{Med}$ with the median loss, and evaluate the gradient on that batch. We use gradient descent on $\pi$ for the arg min part and gradient ascent on $\pi'$ for the arg max part. In Section 5, we empirically show that this gradient-based heuristic (Algorithm 1) is able to minimize the objective and has good convergence properties. As for time complexity, when back-propagating on one batch of samples, our RBC incurs overhead costs compared to vanilla BC, since it must evaluate the loss function for all samples via forward propagation.
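A minimal PyTorch sketch of one iteration of Algorithm 1; as in the BC sketch above, `pi(states)` and `pi_prime(states)` are assumed to return `torch.distributions` objects:

```python
import torch

def rbc_step(pi, pi_prime, states, actions, n_batches, opt_pi, opt_pi_prime):
    """One iteration of Algorithm 1: pick the median-loss batch, then descend
    on pi and ascend on pi_prime using that batch only."""
    chunks = torch.randperm(states.shape[0]).chunk(n_batches)
    with torch.no_grad():  # scan all batches to locate the median loss
        losses = torch.stack([
            -pi(states[i]).log_prob(actions[i]).mean()
            + pi_prime(states[i]).log_prob(actions[i]).mean()
            for i in chunks])
    idx = chunks[int(losses.argsort()[n_batches // 2])]  # the median batch
    obj = (-pi(states[idx]).log_prob(actions[idx]).mean()
           + pi_prime(states[idx]).log_prob(actions[idx]).mean())
    opt_pi.zero_grad()
    opt_pi_prime.zero_grad()
    obj.backward()
    opt_pi.step()                        # gradient descent on pi
    for p in pi_prime.parameters():      # flip grads: gradient ascent on pi'
        if p.grad is not None:
            p.grad.neg_()
    opt_pi_prime.step()
    return float(obj)
```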
4 THEORETICAL ANALYSIS
In this section, we provide theoretical guarantees for our RBC algorithm. Since our method (Definition 3.1) directly estimates the conditional probability $\pi(a|s)$ from the offline demonstrations, our theoretical analysis provides guarantees on $\mathbb{E}_{s \sim \rho_{\pi_E}}\left\|\hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s)\right\|_{TV}^2$, which bounds the total variation distance to $\pi_E$ in expectation over $s \sim \rho_{\pi_E}$. Since the ultimate goal of the learned policy is to maximize the expected cumulative return, we then provide an upper bound on the sub-optimality $J_{\pi_E} - J_{\hat{\pi}^{RBC}}$. We begin the theoretical analysis with Assumption 4.1, which simplifies our analysis and is common in the literature (Agarwal et al., 2019; 2020). By assuming that the policy class $\Pi$ is discrete, our upper bounds depend on the quantity $\log(|\Pi|)/N$, which matches the error rates and sample complexity of BC with expert demonstrations (Agarwal et al., 2019; 2020). Assumption 4.1. We assume that the policy class $\Pi$ is discrete and realizable, i.e., $\pi_E \in \Pi$.
We first present Theorem 4.1, which shows that minimizing the MOM objective via Eq. (3) guarantees closeness of the robust policy to the optimal policy in total variation distance. Theorem 4.1. Suppose we have a corrupted demonstration data set $D$ with sample size $N$ from Definition 2.1, with a constant corruption ratio $\epsilon < 0.5$. Under Assumption 4.1, let $\tau$ be the final objective value attained by $\hat{\pi}^{RBC}$ in the optimization Eq. (3) with batch size $b \le \frac{1}{3\epsilon}$. Then with probability at least $1 - c_1\delta$, we have
$$\mathbb{E}_{s \sim \rho_{\pi_E}}\left\|\hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s)\right\|_{TV}^2 = O\left(\frac{\log(|\Pi|/\delta)}{N} + \tau\right). \tag{5}$$
The proof is collected in Appendix A. We note that the data collection process does not follow the i.i.d. assumption, hence we use a martingale analysis similar to (Agarwal et al., 2019; 2020). The first part of Eq. (5) is the statistical error $\frac{\log(|\Pi|/\delta)}{N}$. The second part is the final objective value $\tau$ of the optimization Eq. (3), which itself consists of two parts: the first scales as $O(1/b)$, which is of the order of the corruption fraction $O(\epsilon)$; the second is the sub-optimality gap due to solving the non-convex optimization.
Our main theorem – Theorem 4.1 – guarantees that a small value of the final objective implies an accurate estimation of the policy, and hence we can certify estimation quality using the obtained final objective value.
Next, we present Theorem 4.2, which guarantees the reward performance of the learned robust policy $\hat{\pi}^{RBC}$. Theorem 4.2. Under the same setting as Theorem 4.1, we have

$$J_{\pi_E} - J_{\hat{\pi}^{RBC}} \le O\left(\frac{1}{(1-\gamma)^2}\sqrt{\frac{\log(|\Pi|/\delta)}{N} + \tau}\right), \tag{6}$$

with probability at least $1 - c_1\delta$.
The proof is also collected in Appendix A. We note that the error scaling and sample complexity of Theorem 4.1 and Theorem 4.2 match those of vanilla BC with expert demonstrations (Agarwal et al., 2019; 2020). Remark 4.1. The quadratic dependency on the effective horizon ($\frac{1}{(1-\gamma)^2}$ in the discounted setting, or $H^2$ in the episodic setting) is widely known as the compounding error or distribution shift in the literature, and is due to an essential limitation of the offline imitation learning setting. Recent work (Rajaraman et al., 2020; Xu et al., 2021) shows that this quadratic dependency cannot be improved without further interaction with the environment or knowledge of the transition dynamics $P$. Hence BC is actually optimal under the no-interaction setting. Also, a line of research considers improving BC by further online interaction with the environment or even active queries of the expert (Ross et al., 2011; Brantley et al., 2019; Ross & Bagnell, 2014). Since our work, as a robust counterpart of BC, focuses on robustness to corruptions in the offline demonstrations setting, it can be naturally combined with online methods such as DAgger (Ross et al., 2011) and Brantley et al. (2019).
5 EXPERIMENTS
In this section, we study the empirical performance of our Robust Behavior Cloning. We evaluate its robustness on several continuous control benchmarks simulated by the PyBullet Coumans & Bai (2016) simulator: HopperBulletEnv-v0, Walker2DBulletEnv-v0, HalfCheetahBulletEnv-v0 and AntBulletEnv-v0. These tasks already have true reward functions in the simulator. We use only state observations and actions for the imitation algorithm, and then use the reward to evaluate the obtained policy when running it in the simulator.
For each task, we collect the presumably optimal expert trajectories using pre-trained agents from Stable Baselines3². In the experiment, we use the Soft Actor-Critic (Haarnoja et al. (2018)) pre-trained agents from Stable Baselines3, and we consider them to be experts.
For the continuous control environments, the action spaces are bounded. Hence we generate the corrupted demonstration data set $D$ as follows: we first randomly choose an $\epsilon$-fraction of the samples, and corrupt their actions to the boundary (namely $-1$ or $+1$). We note that Definition 2.1 allows for arbitrary corruptions; we choose these outlier actions since they have the maximum effect and cannot be easily detected.
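For reproducibility, a sketch of this corruption protocol; the array shapes are our own assumptions:

```python
import numpy as np

def corrupt_demonstrations(actions, eps, seed=0):
    """Push the actions of a random eps-fraction of samples to the action
    boundary (-1 or +1), as in our experimental protocol."""
    rng = np.random.default_rng(seed)
    n = actions.shape[0]
    idx = rng.choice(n, size=int(eps * n), replace=False)
    corrupted = actions.copy()
    corrupted[idx] = rng.choice([-1.0, 1.0], size=corrupted[idx].shape)
    return corrupted
```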
We compare our RBC algorithm (Algorithm 1) to a number of natural baselines: the first baseline directly uses BC on the corrupted demonstrations $D$ without any robustness consideration; the second uses BC on expert demonstrations of the same sample size. In all settings, we fix the policy network as a feed-forward neural network with 2 hidden layers of size {500, 500} and ReLU activations, which is standard in the baselines.
5.1 CONVERGENCE OF OUR ALGORITHM
We illustrate the convergence and the performance of our algorithm to support our theoretical analysis. We track the performance metric of the different algorithms vs. the epoch number over the whole training process. More specifically, we evaluate the current policy in the simulator for 20 trials, and obtain the mean and standard deviation of the cumulative reward for each epoch. This metric corresponds to the theoretical bound in Theorem 4.2.
We focus on four continuous control environments, where the observation space has dimension around 30 and the action space has boundary $[-1, 1]$. We fix the sample size at 60000 and vary the corruption fraction $\epsilon$ over 10% and 20%. Figure 2 validates our theory: our Robust Behavior Cloning nearly matches the performance of BC on expert demonstrations across different environments and corruption ratios.
5.2 PERFORMANCE UNDER DIFFERENT SETUPS
This is the experiment shown in Section 1. In the Lunar Lander control environment, we fix the sample size $N = 4000$ and vary the fraction of corruptions $\epsilon$. As expected, Figure 1 shows that our RBC is resilient to a constant fraction of outliers in the demonstrations, ranging from 0 to 30%, and it achieves nearly the same performance as BC on expert demonstrations. In contrast, directly using BC on corrupted demonstrations yields worse reward performance as the fraction of outliers grows.
6 DISCUSSIONS
6.1 RELATED WORK
Imitation Learning. Behavior Cloning (BC) is the most widely-used imitation learning algorithm (Pomerleau, 1988; Osa et al., 2018) due to its simplicity, effectiveness and scalability, and has been widely adopted in practice. From a theoretical viewpoint, it has been shown that BC achieves information-theoretic optimality in the offline setting (Rajaraman et al., 2020) with no further online interactions or knowledge of the transition dynamics P.
2The pre-trained agents were cloned from the following repositories: https://github.com/ DLR-RM/stable-baselines3, https://github.com/DLR-RM/rl-baselines3-zoo.
With online interaction, there is a line of research focusing on improving BC in different scenarios – for example, Ross et al. (2011) proposed DAgger (Data Aggregation), which queries the expert policy in the online setting. Brantley et al. (2019) proposed using an ensemble of BC policies as an uncertainty measure and interacting with the environment to improve BC by taking the uncertainty into account, without the need to query the expert. Very recently, (Xu et al., 2021; Rajaraman et al., 2021) leveraged knowledge of the transition dynamics P to eliminate the compounding error/distribution shift issue in BC.
Besides BC, there are other imitation learning algorithms: Ho & Ermon (2016) used generative adversarial networks for distribution matching to learn a reward function; Reddy et al. (2019) provided a reinforcement learning framework for imitation learning by artificially setting the reward; Ghasemipour et al. (2020) unified several existing imitation learning algorithms as minimizing a distribution divergence between the learned policy and the expert demonstrations, just to name a few.
Offline RL. Reinforcement learning leverages the signal from the reward function to train the policy. Different from IL, offline RL often does not require the demonstrations to be expert demonstrations (e.g. Fujimoto et al., 2019; Fujimoto & Gu, 2021; Kumar et al., 2020) (interested readers are referred to (Levine et al., 2020)), and even expects the offline data to have higher coverage of different suboptimal policies (Buckman et al., 2020; Jin et al., 2021; Rashidinejad et al., 2021). The behavior-agnostic setting (Nachum et al., 2019; Mousavi et al., 2020) does not even require the data to be collected from a single policy.
The closest relation between offline RL and IL is the learning of stationary visitation distributions, which does not involve the reward signal, similar to IL. A line of recent research, especially for off-policy evaluation, tries to learn the stationary visitation distribution of a given target policy (e.g. Liu et al., 2018; Nachum et al., 2019; Tang et al., 2020; Mousavi et al., 2020; Dai et al., 2020). In particular, Kostrikov et al. (2020) leverage the off-policy evaluation idea in the IL setting.
Robustness in IL and RL. Several recent papers consider corruption-robustness in either RL or IL. In RL, Zhang et al. (2021b) consider adversarial corruption that may corrupt whole episodes in the online RL setting, while a more recent paper (Zhang et al., 2021a) considers offline RL where an $\epsilon$-fraction of the whole data set can be replaced by outliers. However, in Zhang et al. (2021a) the dependency scales with the dimension, whereas it can be a constant in this paper for robust offline IL. Many other papers consider perturbations, heavy tails, or corruptions in either the reward function (Bubeck et al., 2013) or the transition dynamics (Xu & Mannor, 2012; Tamar et al., 2014; Roy et al., 2017).
The papers most closely related to our robust IL setting are (Wu et al., 2019; Tangkaratt et al., 2020; 2021; Brown et al., 2019; Sasaki & Yamashina, 2020), which consider imperfect or noisy observations in imitation learning. However, their algorithms cannot handle arbitrary outliers in the demonstrations, and (Wu et al., 2019; Tangkaratt et al., 2020; 2021) require additional online interactions with the environment. Our algorithm achieves robustness guarantees from purely offline demonstrations, without the potentially costly or risky interaction with the real-world environment.
6.2 SUMMARY AND FUTURE WORKS
In this paper, we considered the corrupted demonstrations issue in imitation learning, and proposed a novel robust algorithm, Robust Behavior Cloning, to deal with corruptions in an offline demonstration data set. The core technique is replacing the vanilla Maximum Likelihood Estimation with a Median-of-Means (MOM) objective, which guarantees policy estimation and reward performance in the presence of a constant fraction of outliers. Our algorithm has strong robustness guarantees and works well in practice.
There are several avenues for future work: since our work focuses on corruption in the offline data set, any improvement in online imitation learning which utilizes Behavior Cloning would benefit from the corruption-robustness guarantees of our offline Robust Behavior Cloning. It would also be of interest to apply our algorithm to real-world environments, such as automated medical diagnosis and autonomous driving.
A PROOFS
The analysis of maximum likelihood estimation is standard in the i.i.d. supervised learning setting (van de Geer, 2000). In our proofs for the robust offline imitation learning algorithm, the analysis of sequential decision making leverages the martingale analysis technique from (Zhang, 2006; Agarwal et al., 2020).
Our Robust Behavior Cloning (Definition 3.1) solves the following optimization:

$$\hat{\pi}^{RBC} = \arg\min_{\pi \in \Pi} \max_{\pi' \in \Pi} \underset{1 \le j \le M}{\mathrm{median}}\left(\ell_j(\pi) - \ell_j(\pi')\right), \tag{7}$$

where the loss function $\ell_j(\pi)$ is the average Negative Log-Likelihood in batch $B_j$:

$$\ell_j(\pi) = \frac{1}{b} \sum_{(s,a) \in B_j} -\log(\pi(a|s)). \tag{8}$$
This can be understood as a robust counterpart of maximum likelihood estimation in a sequential decision process.
With a slight abuse of notation, we use $x_i$ and $y_i$ to denote the observation and action, and the underlying unknown expert distribution is $y_i \sim p(\cdot|x_i)$ with $p(y|x) = f^*(x, y)$. Following Assumption 4.1, we have the realizability $f^* \in \mathcal{F}$, and the discrete function class satisfies $|\mathcal{F}| < \infty$.
Let $D$ denote the data set and let $D'$ denote a tangent sequence $\{x_i', y_i'\}_{i=1}^{|D|}$. The tangent sequence is defined by $x_i' \sim D_i(x_{1:i-1}, y_{1:i-1})$ and $y_i' \sim p(\cdot|x_i')$. Note that $x_i'$ follows the distribution $D_i$, which depends on the original sequence; hence the tangent sequence is independent conditional on $D$.
For this martingale process, we first introduce a decoupling Lemma from Agarwal et al. (2020).
Lemma A.1. [Lemma 24 in Agarwal et al. (2020)] Let $D$ be a dataset, and let $D'$ be a tangent sequence. Let $\Gamma(f, D) = \sum_{(x,y) \in D} \phi(f, (x, y))$ be any function which can be decomposed additively across samples in $D$, where $\phi$ is any function of $f$ and the sample $(x, y)$. Let $\hat{f} = \hat{f}(D)$ be any estimator taking the dataset $D$ as input and with range $\mathcal{F}$. Then we have

$$\mathbb{E}_D\left[\exp\left(\Gamma(\hat{f}, D) - \log \mathbb{E}_{D'} \exp(\Gamma(\hat{f}, D')) - \log|\mathcal{F}|\right)\right] \le 1.$$
Then we present a lemma which upper bounds the TV distance via a loss function closely related to the KL divergence. Such bounds for probability distributions are discussed extensively in the literature, e.g., Tsybakov (2009).
Lemma A.2. [Lemma 25 in Agarwal et al. (2020)] For any two conditional probability densities $f_1, f_2$ and any state distribution $D \in \Delta(\mathcal{X})$, we have

$$\mathbb{E}_{x \sim D}\left\|f_1(x, \cdot) - f_2(x, \cdot)\right\|_{TV}^2 \le -2\log \mathbb{E}_{x \sim D,\, y \sim f_2(\cdot|x)} \exp\left(-\frac{1}{2}\log\frac{f_2(x, y)}{f_1(x, y)}\right).$$
A.1 PROOF OF THEOREM 4.1
With these Lemmas in hand, we are now equipped to prove our main theorem (Theorem 4.1), which guarantees the solution π̂RBC of eq. (3) is close to the optimal policy πE in TV distance.
Theorem A.1 (Theorem 4.1). Suppose we have a corrupted demonstration data set $D$ with sample size $N$ from Definition 2.1, with a constant corruption ratio $\epsilon < 0.5$. Under Assumption 4.1, let $\tau$ be the final objective value attained by $\hat{\pi}^{RBC}$ in the optimization Eq. (3) with batch size $b \le \frac{1}{3\epsilon}$. Then with probability at least $1 - c_1\delta$, we have

$$\mathbb{E}_{s \sim \rho_{\pi_E}}\left\|\hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s)\right\|_{TV}^2 = O\left(\frac{\log(|\Pi|/\delta)}{N} + \tau\right).$$
Proof of Theorem 4.1. En route to the proof of Theorem 4.1, we keep using the notations in Lemma A.1 and Lemma A.2, where the state observation is x, the action is y, and the discrete function class is F .
Similar to Agarwal et al. (2020), we first note that Lemma A.1 can be combined with a simple Chernoff bound to obtain an exponential tail bound. With probability at least $1 - c_1\delta$, we have

$$-\log \mathbb{E}_{D'} \exp(\Gamma(\hat{f}, D')) \le -\Gamma(\hat{f}, D) + \log|\mathcal{F}| + \log(1/\delta). \tag{9}$$
Our proof technique relies on lower bounding the LHS of Eq. (9) and upper bounding the RHS of Eq. (9).
Let the batch size $b \le \frac{1}{3\epsilon}$, a constant fixed in Definition 3.1; then the number of batches is $M \ge 3\epsilon N$, so the $\epsilon N$ corrupted samples can contaminate at most $\epsilon N \le M/3$ batches, and at least a $2/3$ fraction (66%) of the batches contains no corruption.
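A tiny numerical illustration of this counting argument (the values of $N$ and $\epsilon$ below are assumptions chosen only for illustration):

```python
N, eps = 60_000, 0.10
b = int(1 / (3 * eps))      # batch size b <= 1/(3*eps), here b = 3
M = N // b                  # M = 20,000 batches, and M >= 3*eps*N = 18,000
spoiled_max = int(eps * N)  # eps*N corrupted samples hit <= 6,000 batches
print(f"guaranteed-clean batches >= {1 - spoiled_max / M:.0%}")  # >= 70%
```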
In the definition of RBC (Definition 3.1), we solve
$$\hat{\pi}^{\mathrm{RBC}} = \arg\min_{\pi \in \Pi}\ \max_{\pi' \in \Pi}\ \operatorname*{median}_{1 \le j \le M}\big(\ell_j(\pi) - \ell_j(\pi')\big). \tag{10}$$
Notice that since $\pi_E$ is one feasible solution of the inner maximization step in eq. (10), we can choose $\pi' = \pi_E$. Now we consider the objective, the difference of Negative Log-Likelihoods between $f$ and $f^*$, i.e., $\ell_j(f) - \ell_j(f^*)$, defined via eq. (4), where
$$\ell_j(\pi) = \frac{1}{b}\sum_{(s,a)\in\mathcal{B}_j} -\log\big(\pi(a\,|\,s)\big).$$
Hence, we choose $\Gamma(f,\mathcal{D})$ in Lemma A.1 as
$$\Gamma_j(f,\mathcal{D}) = \frac{N}{b}\sum_{i\in\mathcal{B}_j} -\frac{1}{2}\log\frac{f^*(x_i, y_i)}{f(x_i, y_i)} = \frac{N}{2b}\sum_{i\in\mathcal{B}_j}\big(\log f(x_i, y_i) - \log f^*(x_i, y_i)\big),$$
which is the difference of Negative Log-Likelihoods $N\big(\ell_j(f^*) - \ell_j(f)\big)/2$ evaluated on a single batch $\mathcal{B}_j$, $j \in [M]$. This is exactly the objective on a single batch appearing in eq. (3).
Lower bound for the LHS of eq. (9). We apply the concentration bound eq. (9) to the uncorrupted batches; by the counting above, these form the majority of all batches. For such a batch $\mathcal{B}_j$, the LHS of eq. (9) can be lower bounded by the TV distance according to Lemma A.2:
$$-\log \mathbb{E}_{\mathcal{D}'}\left[\exp\left(\frac{N}{b}\sum_{i\in\mathcal{B}_j} -\frac{1}{2}\log\frac{f^\star(x'_i, y'_i)}{\hat{f}(x'_i, y'_i)}\right) \,\Bigg|\, \mathcal{D}\right] \overset{(i)}{=} -\frac{N}{b}\sum_{i\in\mathcal{B}_j}\log \mathbb{E}_{x,y\sim\mathcal{D}_i}\exp\left(-\frac{1}{2}\log\frac{f^\star(x,y)}{\hat{f}(x,y)}\right) \overset{(ii)}{\ge} \frac{N}{2b}\sum_{i\in\mathcal{B}_j}\mathbb{E}_{x\sim\mathcal{D}_i}\Big\|\hat{f}(x,\cdot) - f^\star(x,\cdot)\Big\|_{\mathrm{TV}}^2, \tag{11}$$
where (i) follows from the independence between $\hat{f}$ and $\mathcal{D}'$ due to the decoupling technique, and (ii) follows from Lemma A.2, which upper bounds the squared Total Variation distance.
Upper bound for the RHS of eq. (9). Note that the objective is the median of the per-batch means, and $f^*$ is one feasible solution of the inner maximization step in eq. (10). Since $\tau$ is the objective value attained by $\hat{\pi}^{\mathrm{RBC}}$ in the optimization eq. (3), we have $\ell_{\mathrm{Med}}(\pi) - \ell_{\mathrm{Med}}(\pi') \le \tau$ for the median batch $\mathcal{B}_{\mathrm{Med}}$, which is equivalent to $-\Gamma_{\mathrm{Med}}(\hat{f},\mathcal{D}) \le N\tau/2$.
Hence, for the median batch $\mathcal{B}_{\mathrm{Med}}$, the RHS of eq. (9) can be upper bounded by
$$-\Gamma_{\mathrm{Med}}(\hat{f},\mathcal{D}) + \log|\mathcal{F}| + \log(1/\delta) \le \log|\mathcal{F}| + \log(1/\delta) + N\tau/2. \tag{12}$$
Putting together the pieces, eq. (11) and eq. (12) for $\mathcal{B}_{\mathrm{Med}}$, we have
$$\mathbb{E}_{s\sim\rho_{\pi_E}}\big\|\hat{\pi}^{\mathrm{RBC}}(\cdot|s) - \pi_E(\cdot|s)\big\|_{\mathrm{TV}}^2 = O\left(\frac{\log(|\mathcal{F}|/\delta)}{N} + \tau\right)$$
with probability at least $1 - c_1\delta$.
A.2 PROOF OF THEOREM 4.2
With the supervised-learning guarantee of Theorem 4.1 in hand, which upper bounds $\mathbb{E}_{s\sim\rho_{\pi_E}}\|\hat{\pi}^{\mathrm{RBC}}(\cdot|s) - \pi_E(\cdot|s)\|_{\mathrm{TV}}^2$, we are now able to present the suboptimality guarantee on the reward of $\hat{\pi}^{\mathrm{RBC}}$. This bound directly corresponds to the reward performance of the policy.
Theorem A.2 (Theorem 4.2). Under the same setting as Theorem 4.1, we have
$$J_{\pi_E} - J_{\hat{\pi}^{\mathrm{RBC}}} \le O\left(\frac{1}{(1-\gamma)^2}\sqrt{\frac{\log(|\mathcal{F}|/\delta)}{N} + \tau}\right),$$
with probability at least $1 - c_1\delta$.
Proof of Theorem 4.2. This part is similar to Agarwal et al. (2019), and we have
$$(1-\gamma)\big(J_{\pi_E} - J_{\hat{\pi}^{\mathrm{RBC}}}\big) = \mathbb{E}_{s\sim\rho_{\pi_E}}\,\mathbb{E}_{a\sim\pi_E(\cdot|s)}\, A^{\hat{\pi}^{\mathrm{RBC}}}(s,a) \le \frac{1}{1-\gamma}\sqrt{\mathbb{E}_{s\sim\rho_{\pi_E}}\big\|\hat{\pi}^{\mathrm{RBC}}(\cdot|s) - \pi_E(\cdot|s)\big\|_1^2} = \frac{2}{1-\gamma}\sqrt{\mathbb{E}_{s\sim\rho_{\pi_E}}\big\|\hat{\pi}^{\mathrm{RBC}}(\cdot|s) - \pi_E(\cdot|s)\big\|_{\mathrm{TV}}^2},$$
where we use the fact that $\sup_{s,a,\pi}|A^\pi(s,a)| \le \frac{1}{1-\gamma}$ for the advantage function, since the reward is always bounded between 0 and 1.
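For completeness, a short sketch of the two standard facts behind the first inequality (elementary results, written here for discrete actions and abbreviating $\hat{\pi} = \hat{\pi}^{\mathrm{RBC}}$): since $\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}A^{\hat{\pi}}(s,a) = 0$,
$$\mathbb{E}_{a\sim\pi_E(\cdot|s)}A^{\hat{\pi}}(s,a) = \sum_a\big(\pi_E(a|s)-\hat{\pi}(a|s)\big)A^{\hat{\pi}}(s,a) \le \frac{1}{1-\gamma}\big\|\pi_E(\cdot|s)-\hat{\pi}(\cdot|s)\big\|_1,$$
and $\|p-q\|_1 = 2\|p-q\|_{\mathrm{TV}}$ for probability distributions; Jensen's inequality then moves the expectation over $s$ inside the square root.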
Combining with Theorem 4.1, we have
$$J_{\pi_E} - J_{\hat{\pi}^{\mathrm{RBC}}} \le O\left(\frac{1}{(1-\gamma)^2}\sqrt{\frac{\log(|\mathcal{F}|/\delta)}{N} + \tau}\right),$$
with probability at least $1 - c_1\delta$.

1. What is the main contribution of the paper in the field of imitation learning?
2. What are the strengths of the proposed algorithm, particularly in terms of its robustness and simplicity?
3. What are the weaknesses of the paper, especially regarding the assumptions made in the theorems?
4. How does the reviewer assess the novelty and relevance of the paper compared to prior works, such as [1]?
5. Are there any questions or concerns regarding the empirical results presented in the paper?
Summary Of The Paper
This paper proposes the definition of corrupted demonstrations and a new robust algorithm for offline imitation learning from corrupted demonstrations. The main idea used to guarantee robustness is Median-of-Means (MOM), a technique from high-dimensional robust regression. The authors propose an optimization formulation based on MOM. The proposed algorithm, named Robust Behavior Cloning (RBC), solves a min-max-median optimization to train two policies π and π′.
Under the assumption that the policy class is discrete, two meaningful theorems are provided. The first shows that the policy obtained from RBC is close to the expert policy with high probability. The second bounds the suboptimality, measured as the difference between the expected returns of the RBC policy and the expert policy.
Finally, the authors provide empirical results on two continuous control benchmarks. In both domains, the proposed algorithm achieves competitive performance with BC on expert demos and outperforms BC on corrupted demonstrations by a large margin.
Review
Strength:
The authors provide theoretical and empirical analyses showing that the proposed algorithm is simple yet effective.
Weaknesses:
In the theorems, the authors assume that a small enough ϵ exists. However, with a very small ϵ, BC also performs well. The statement would be stronger if ϵ were included in the bounds rather than in the assumption.
I think the previous work [1] is very similar to this work: it also proposes a robust IL algorithm that learns from noisy expert demonstrations. Because that work is also applicable in the offline setting, some discussion or empirical comparison should be provided.
[1] Sasaki, Fumihiro, and Ryota Yamashina. "Behavioral Cloning from Noisy Demonstrations." ICLR. 2021.
1. What is the main contribution of the paper regarding imitation learning?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and theoretical justifications?
3. What are the weaknesses of the paper, especially regarding its experiments and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
In this paper, the authors aim to find an offline solution to the imitation learning problem with corrupted demonstration data. They propose a simple robust behavior cloning (RBC) approach based on the Median-of-Means (MOM) objective. The contributions of this work are as follows:
A novel RBC algorithm is proposed with a MOM objective in policy estimation based on behavior cloning.
The authors provide theoretical justifications for the error scaling and sample complexity of the proposed RBC, showing that RBC guarantees robustness to corrupted demonstrations at no cost in statistical error.
Experiments on some low-dimensional tasks show the effectiveness of the proposed RBC.
Review
Strengths:
The proposed MOM objective for robust imitation learning is novel.
Theoretical justifications are available showing the validity of the proposed RBC.
The paper is well-organized and clearly written.
Weaknesses:
In the experiments, data corruption changes a sampled action to a boundary value, which does not exactly match the demonstration corruption definition in Def. 2.1. It would be better for the authors to corrupt the actions with arbitrary values within the action range.
The experiments focus on tasks with low-dimensional state spaces. Considering that BC also achieves good performance in tasks like Ant, it would be good for the authors to consider tasks with higher-dimensional state spaces. Moreover, there are only two tasks in the experiment section, which does not strongly validate the effectiveness of the proposed solution.
The performance of RBC is not convincing in the HalfCheetah task as only 1% of corruption is added. It would be better for the authors to show RBC’s performance on HalfCheetah when the corruption rate increases to 20%, to better show the advantage of the proposed RBC.
Only behavior cloning is compared in the experiment section. Baselines that assume expert demonstrations such as EDM[1] and DFSN[2] should also be compared to show the robustness to data corruption and better show the necessity of the proposed solution.
[1] Jarrett, D., Bica, I., & van der Schaar, M. (2020). Strictly batch imitation learning by energy-based distribution matching. arXiv preprint arXiv:2006.14154.
[2] Donghun Lee, Srivatsan Srinivasan, and Finale Doshi-Velez. Truly batch apprenticeship learning with deep successor features. International Joint Conference on Artificial Intelligence (IJCAI), 2019.
ICLR | Title
Robust Imitation Learning from Corrupted Demonstrations
Abstract
We consider offline Imitation Learning from corrupted demonstrations where a constant fraction of data can be noise or even arbitrary outliers. Classical approaches such as Behavior Cloning assumes that demonstrations are collected by an presumably optimal expert, hence may fail drastically when learning from corrupted demonstrations. We propose a novel robust algorithm by minimizing a Median-of-Means (MOM) objective which guarantees the accurate estimation of policy, even in the presence of constant fraction of outliers. Our theoretical analysis shows that our robust method in the corrupted setting enjoys nearly the same error scaling and sample complexity guarantees as the classical Behavior Cloning in the expert demonstration setting. Our experiments on continuous-control benchmarks validate that existing algorithms are fragile under corrupted demonstration while our method exhibits the predicted robustness and effectiveness.
1 INTRODUCTION
Recent years have witnessed the success of using autonomous agent to learn and adapt to complex tasks and environments in a range of applications such as playing games (e.g. Mnih et al., 2015; Silver et al., 2018; Vinyals et al., 2019), autonomous driving (e.g. Kendall et al., 2019; Bellemare et al., 2020), robotics (Haarnoja et al., 2017), medical treatment (e.g. Yu et al., 2019) and recommendation system and advertisement (e.g. Li et al., 2011; Thomas et al., 2017).
Previous success for sequential decision making often requires two key components: (1) a careful design reward function that can provide the supervision signal during learning and (2) an unlimited number of online interactions with the real-world environment (or a carefully designed simulator) to query new unseen region. However, in many scenarios, both components are not allowed. For example, it is hard to define the reward signal in uncountable many extreme situations in autonomous driving; and it is dangerous and risky to directly deploy a learning policy on human to gather information in autonomous medical treatment (Yu et al., 2019). Therefore an offline sequential decision making algorithm without reward signal is in demand.
Offline Imitation Learning (IL) offers an elegant way to train intelligent agents for complex task without the knowledge of reward functions or using a simulator. Since the offline imitation learning does not interact with the environment, in order to guide intelligent agents to correct behaviors, it is crucial to have high quality expert demonstrations. The well-known Behavior Cloning (BC) algorithm (Pomerleau, 1988) requires that the demonstrations given for training are all presumably optimal and it aims to learn that mapping from state to action from expert demonstration data set.
However in real world scenario, since the demonstration is often collected from human, we cannot guarantee that all the demonstrations we collected have high quality. An human expert can make mistakes by accident or due to the hardness of a complicated scenario (e.g., medical diagnosis). Furthermore, even an expert demonstrates a successful behavior, the recorder or the recording system can have a chance to contaminate the data by accident or on purpose (e.g. Eykholt et al., 2018; Neff & Nagy, 2016).
This leads to the central question of the paper:
Can the optimality assumption on expert demonstrations be weakened or even tolerate arbitrary outliers under offline imitation learning settings?
More concretely, we consider corrupted demonstrations setting where the majority of the demonstration data is collected by an expert policy (presumably optimal), and the remaining data can be even arbitrary outliers (the formal definition is presented in Definition 2.1). This has great significance in many applications, such as automated medical diagnosis for healthcare (Yu et al. (2019)) and autonomous driving (Ma et al., 2018), where the historical data (demonstration) is often complicated and noisy which requires robustness consideration.
However, the classical offline imitation learning approaches such as Behavior Cloning (BC) fails drastically under this corrupted demonstration settings. We illustrated this phenomenon in Figure 1. We use BC on a continuous control environment, and the performance of the policy learned by BC drops drastically as the fraction of corruptions increases in the offline demonstration data set. However, our proposed algorithm – Robust Behavior Cloning (Algorithm 1) – is resilient to corruptions in the offline demonstrations. The detailed experimental setup is included in Section 5. We now summarize our contributions as follows.
1.1 MAIN CONTRIBUTIONS
• (Algorithm) We consider robustness in offline imitation learning where we have corrupted demonstrations. Our definition for corrupted demonstrations significantly weakens the presumably optimal assumption on demonstration data, and can tolerate a constant -fraction of state-action pairs to be arbitrarily corrupted. We refer to Definition 2.1 for a more precise statement. To deal with this issue, we propose a novel algorithm Robust Behavior Cloning (Algorithm 1) for robust imitation learning. Our algorithm works in the offline setting, without any further interaction with the environment. The core ingredient of our robust algorithm is using a novel median of means objective in policy estimation compared to classical Behavior Cloning. Hence, it’s simple to implement, and computationally efficient.
• (Theoretical guarantees) We analyze our Robust Behavior Cloning algorithm when there exists a constant fraction of outliers in the demonstrations under the offline setting. We show that our RBC achieves nearly the same error scaling and sample complexity compared to vanilla BC with expert
demonstrations. To this end, our algorithm guarantees robustness to corrupted demonstrations at no cost of statistical error. This is the content of Section 4.
• (Empirical support) We validate the predicted robustness and show the effectiveness of our algorithm on different high-dimensional continuous control benchmarks – the vanilla BC is fragile indeed with corrupted demonstrations, and our Robust Behavior Cloning achieves nearly the same performance compared to vanilla BC with expert demonstrations. This is the content of Section 5.
2 PROBLEM SETUP
2.1 REINFORCEMENT LEARNING AND IMITATION LEARNING
Markov Decision Process and Reinforcement Learning. We start the problem setup by introducing the Markov decision process (MDP). An MDP M = 〈S,A, r,P, µ0, γ〉 consists of a state space S, an action space A, an unknown reward function r : S × A → [0,Rmax], an unknown transition kernel P : S × A → ∆(S), an initial state distribution µ0 ∈ ∆(S), and a discounted factor γ ∈ (0, 1). We use ∆ to denote the probability distributions on the simplex. An agent acts in a MDP following a policy π(·|s), which prescribes a distribution over the action space A given each state s ∈ S. Running the policy starting from the initial distribution s1 ∼ µ0 yields a stochastic trajectory T := {st,at, rt}1≤t≤∞, where st,at, rt represent the state, action, reward at time t respectively, with at ∼ π(·|st) and the next state st+1 follows the unknown transition kernel st+1 ∼ P(·|st,at). We denote ρπ,t ∈ ∆(S × A) as the marginal joint stationary distribution for state, action at time step t, and we define ρπ = (1 − γ) ∑∞ i=1 γ
tρπ,t as visitation distribution for policy π. For simplicity, we reuse the notation ρπ(s) = ∫ a∈A ρπ(s, a)da to denote the marginal distribution over state.
The goal of reinforcement learning is to find the best policy π to maximize the expected cumulative return Jπ = ET ∼π [ ∑∞ i=1 γ
trt]. Common RL algorithms (e.g., please refer to Szepesvári (2010)) requires online interaction and exploration with the environments. However, this is prohibited in the offline setting.
Imitation Learning. Imitation learning (IL) aims to obtain a policy to mimic expert’s behavior with demonstration data set D = {(si,ai)}Ni=1 where N is the sample size of D. Note that we do not need any reward signal. Tradition imitation learning assumes perfect (or near-optimal) expert demonstration – for simplification we assume that each state-action pair (si,ai) is drawn from the joint stationary distribution of an expert policy πE :
(si,ai) ∼ ρπE (1)
Learning from demonstrations with or without online interactions has a long history (e.g., Pomerleau (1988); Ho & Ermon (2016)). The goal of offline IL is to learn a policy π̂IL = A(D) through an IL algorithm A, given the demonstration data set D, without further interaction with the unknown true transition dynamic P.
Behavior Cloning. The Behavior Cloning (BC) is the well known algorithm (Pomerleau, 1988) for IL which only uses offline demonstration data without any interaction with the environment. More specifically, BC solves the following Maximum Likelihood Estimation (MLE) problem, which minimizes the average Negative Log-Likelihood (NLL) for all samples in offline demonstrations D:
π̂BC = arg min π∈Π
1
N ∑ (s,a)∈D − log(π(a|s)) (2)
Recent works (Agarwal et al., 2019; Rajaraman et al., 2020; Xu et al., 2021) have shown that BC is optimal under the offline setting, and can only be improved with the knowledge of transition dynamic P in the worst case. Also, another line of research considers improving BC with further online interaction of the environment (Brantley et al., 2019) or actively querying an expert (Ross et al., 2011; Ross & Bagnell, 2014).
2.2 LEARNING FROM CORRUPTED DEMONSTRATIONS
However, it is sometimes unrealistic to assume that the demonstration data set is collected through a presumably optimal expert policy. In this paper, we propose Definition 2.1 for the corrupted demonstrations, which tolerates gross corruption or model mismatch in offline data set.
Definition 2.1 (Corrupted Demonstrations). Let the state-action pair (si,ai)Ni=1 drawn from the joint stationary distribution of a presumably optimal expert policy πE . The corrupted demonstration data D are generated by the following process: an adversary can choose an arbitrary -fraction ( < 0.5) of the samples in [N ] and modifies them with arbitrary values. We note that is a constant independent of the dimensions of the problem. After the corruption, we useD to denote the corrupted demonstration data set.
This corruption process can represent gross corruptions or model mismatch in the demonstration data set. To the best of our knowledge, Definition 2.1 is the first definition for corrupted demonstrations in imitation learning which tolerates arbitrary corruptions.
In the supervised learning, the well-known Huber’s contamination model (Huber (1964)) considers (x, y)
iid∼ (1− )P+ Z,where x ∈ Rd is the explanatory variable (feature) and y ∈ R is the response variable. Here, P denotes the authentic statistical distribution such as Normal mean estimation or linear regression model, and Z denotes the outliers.
Dealing with corrupted x and y in high dimensions has a long history in the robust statistics community (e.g. Rousseeuw, 1984; Chen et al., 2013; 2017; Yin et al., 2018). However, it’s only until recently that robust statistical methods can handle constant -fraction (independent of dimensionality Rd) of outliers in x and y (Klivans et al., 2018; Prasad et al., 2020; Diakonikolas et al., 2019; Liu et al., 2019; 2020; Shen & Sanghavi, 2019; Lugosi & Mendelson, 2019; Lecué & Lerasle, 2020; Jalal et al., 2020). We note that in Imitation Learning, the data collecting process for the demonstrations does not obey i.i.d. assumption in traditional supervised learning due to the temporal dependency.
Notations. Throughout this paper, we use {ci}i=1,2,3 to denote the universal positive constant. We utilize the big-O notation f(n) = O(g(n)) to denote that there exists a positive constant c1 and a natural number n0 such that, for all n ≥ n0, we have f(n) ≤ c1g(n).
3 OUR ALGORITHMS
It is well known that the Median-of-Means (MOM) estimator achieves sub-Gaussian concentration bound for one-dimensional mean estimation even though the underlying distribution only has second moment bound (heavy tailed distribution) (interested readers are referred to textbooks such as Nemirovsky & Yudin (1983); Jerrum et al. (1986); Alon et al. (1999)).
The vanilla MOM estimator for one-dimensional mean estimation works like following: (1) randomly partition N samples into M batches; (2) calculates the mean for each batch; (3) outputs the median of these batch mean. Very recently, MOM estimators are used for high dimensional robust regression (Brownlees et al., 2015; Hsu & Sabato, 2016) by applying MOM estimator on the loss function of empirical risk minimization process.
3.1 ROBUST BEHAVIOR CLONING
Motivated by using MOM estimators on the loss function, we propose Definition 3.1 which uses a MOM objective to handle arbitrary outliers in demonstration data set (s,a) ∈ D. Definition 3.1 (Robust Behavior Cloning). We split the corrupted demonstrationsD intoM batches randomly1: {Bj}Mj=1, with the batch size b ≤ 13 . The Robust Behavior Cloning solves the following optimization
π̂RBC = arg min π∈Π max π′∈Π median 1≤j≤M (`j(π)− `j(π′)) , (3)
1Without loss of generality, we assume that M exactly divides the sample size N , and b = N M is the batch size.
Algorithm 1 Robust Behavior Cloning.
1: Input: Corrupted demonstrations D
2: Output: Robust policy π̂RBC
3: Initialize π and π′.
4: for t = 0 to T − 1 do
5: Randomly partition D into M batches with batch size b ≤ 1/(3ε).
6: For each batch j ∈ [M], calculate the loss ℓj(π) − ℓj(π′) by eq. (4).
7: Pick the batch with the median loss among the M batches, $\mathrm{median}_{1 \le j \le M} (\ell_j(\pi) - \ell_j(\pi'))$, and evaluate the gradient for π and π′ using back-propagation on that batch: (i) perform gradient descent on π; (ii) perform gradient ascent on π′.
8: end for
9: Return: Robust policy π̂RBC = π.
where the loss function ℓj(π) is the average Negative Log-Likelihood in the batch Bj:

$$\ell_j(\pi) = \frac{1}{b} \sum_{(s,a) \in B_j} -\log(\pi(a|s)). \tag{4}$$
The workhorse of Definition 3.1 is eq. (3), which uses a novel variant of the Median-of-Means (MOM) tournament procedure (Le Cam (2012); Lugosi & Mendelson (2019); Lecué & Lerasle (2020); Jalal et al. (2020)). In eq. (4), we calculate the average Negative Log-Likelihood (NLL) on a single batch, and π̂RBC is the solution of a min-max formulation based on the batch loss ℓj(π). Though our algorithm minimizes a robust version of the NLL, we do not rely on the traditional i.i.d. assumption of supervised learning.
To gain some intuition for the formulation eq. (3): if we replace the median operator with the mean operator, then RBC is equivalent to BC, which simply minimizes the empirical average of the Negative Log-Likelihood; this follows from the linearity of the mean operator. However, the mean is not robust to corrupted demonstrations, hence we use the median operator on the loss function.
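To make this explicit, note that with equal batch sizes b = N/M, averaging the batch losses recovers the full-data NLL difference:

$$\frac{1}{M} \sum_{j=1}^{M} \left( \ell_j(\pi) - \ell_j(\pi') \right) = \frac{1}{N} \sum_{(s,a) \in D} \left( -\log \pi(a|s) + \log \pi'(a|s) \right),$$

so the arg min over π decouples from π′ and coincides with the vanilla BC solution.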
The intuition behind solving this min-max formulation is that the inner variable π′ needs to get close to πE to maximize the difference of the loss functions, and the outer variable π also needs to get close to πE. Hence we can guarantee that π̂RBC will be close to πE. In Section 4, we show that under corrupted demonstrations, π̂RBC in eq. (3) has the same error scaling and sample complexity as vanilla BC trained on expert demonstrations.
In Section 4, we provide rigorous statistical guarantees for Definition 3.1. However, the objective function eq. (3) in Definition 3.1 is not convex (in general), hence we use Algorithm 1 as a computational heuristic to solve it.
In each iteration of Algorithm 1, we randomly partition the demonstration data set D into M batches and calculate the loss ℓj(π) − ℓj(π′) by eq. (4). We then pick the batch BMed with the median loss and evaluate the gradient on that batch. We use gradient descent on π for the arg min part and gradient ascent on π′ for the arg max part. In Section 5, we empirically show that this gradient-based heuristic (Algorithm 1) is able to minimize the objective and has good convergence properties. As for the time complexity, RBC only incurs overhead over vanilla BC in evaluating the loss function on all samples via forward propagation, since back-propagation is performed on a single batch.
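For concreteness, here is a minimal PyTorch sketch of one iteration of Algorithm 1. We assume the policy objects expose a log_prob(actions, states) method (e.g., a Gaussian policy head); all names here are ours, not from the paper:

import torch

def rbc_step(policy, adv_policy, optimizer, states, actions, num_batches, adv_lr=1e-3):
    # Randomly partition D into M batches and compute l_j(pi) - l_j(pi') per batch.
    perm = torch.randperm(states.shape[0])
    batch_losses = []
    for idx in perm.chunk(num_batches):
        nll = -policy.log_prob(actions[idx], states[idx]).mean()          # l_j(pi)
        nll_adv = -adv_policy.log_prob(actions[idx], states[idx]).mean()  # l_j(pi')
        batch_losses.append(nll - nll_adv)
    # Pick the batch attaining the median loss and back-propagate through it only.
    stacked = torch.stack(batch_losses)
    med_loss = stacked[torch.argsort(stacked)[len(batch_losses) // 2]]
    optimizer.zero_grad()
    adv_policy.zero_grad()
    med_loss.backward()
    optimizer.step()                      # gradient descent on pi
    with torch.no_grad():                 # gradient ascent on pi'
        for p in adv_policy.parameters():
            if p.grad is not None:
                p.add_(adv_lr * p.grad)
    return med_loss.item()

Back-propagating only through the median batch is what keeps the per-iteration cost close to vanilla BC, apart from the forward passes needed to score all batches.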
4 THEORETICAL ANALYSIS
In this section, we provide theoretical guarantees for our RBC algorithm. Since our method (Definition 3.1) directly estimates the conditional probability π(a|s) over the offline demonstrations, our theoretical analysis provides guarantees on $\mathbb{E}_{s \sim \rho_{\pi_E}} \|\hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s)\|_{TV}^2$, the total variation distance to πE in expectation over s ∼ ρπE. Since the ultimate goal of the learned policy is to maximize the expected cumulative return, we then provide an upper bound on the sub-optimality JπE − Jπ̂RBC. We begin the theoretical analysis with Assumption 4.1, which simplifies our analysis and is common in the literature (Agarwal et al., 2019; 2020). By assuming that the policy class Π is discrete, our upper bounds depend on the quantity log(|Π|)/N, which matches the error rates and sample complexity of BC with expert demonstrations (Agarwal et al., 2019; 2020). Assumption 4.1. We assume that the policy class Π is discrete and realizable, i.e., πE ∈ Π.
We first present Theorem 4.1, which shows that minimizing the MOM objective via eq. (3) guarantees closeness of the robust policy to the optimal policy in total variation distance. Theorem 4.1. Suppose we have a corrupted demonstration data set D with sample size N from Definition 2.1 with a constant corruption ratio ε < 0.5. Under Assumption 4.1, let τ be the objective value attained by π̂RBC in the optimization eq. (3) with batch size b ≤ 1/(3ε). Then, with probability at least 1 − c1δ, we have

$$\mathbb{E}_{s \sim \rho_{\pi_E}} \left\| \hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s) \right\|_{TV}^2 = O\left( \frac{\log(|\Pi|/\delta)}{N} + \tau \right). \tag{5}$$
The proof is collected in Appendix A. We note that the data collection process does not follow the i.i.d. assumption, hence we use a martingale analysis similar to (Agarwal et al., 2019; 2020). The first part of eq. (5) is the statistical error log(|Π|/δ)/N. The second part is the final objective value τ of the optimization eq. (3), which itself includes two parts: one scales as O(1/b), which is equivalent to the fraction of corruption O(ε); the other is the sub-optimality gap due to solving the non-convex optimization.
Our main theorem – Theorem 4.1 – guarantees that a small value of the final objective implies an accurate estimation of policy and hence we can certify estimation quality using the obtained final value of the objective.
Next, we present Theorem 4.2, which guarantees the reward performance of the learned robust policy π̂RBC. Theorem 4.2. Under the same setting as Theorem 4.1, we have

$$J_{\pi_E} - J_{\hat{\pi}^{RBC}} \le O\left( \frac{1}{(1-\gamma)^2} \sqrt{\frac{\log(|\Pi|/\delta)}{N} + \tau} \right), \tag{6}$$

with probability at least 1 − c1δ.
The proof is also collected in Appendix A. We note that the error scaling and sample complexity of Theorem 4.1 and Theorem 4.2 match vanilla BC with expert demonstrations (Agarwal et al., 2019; 2020). Remark 4.1. The quadratic dependency on the effective horizon (1/(1−γ)² in the discounted setting or H² in the episodic setting) is widely known as the compounding error or distribution shift in the literature, and is due to an essential limitation of the offline imitation learning setting. Recent work (Rajaraman et al., 2020; Xu et al., 2021) shows that this quadratic dependency cannot be improved without any further interaction with the environment or knowledge of the transition dynamics P; hence BC is actually optimal in the no-interaction setting. A line of research also considers improving BC by further online interaction with the environment or even active queries to the expert (Ross et al., 2011; Brantley et al., 2019; Ross & Bagnell, 2014). Since our work, as a robust counterpart of BC, focuses on robustness to corruptions in the offline demonstration setting, it can naturally be used in online settings such as DAGGER (Ross et al., 2011) and Brantley et al. (2019).
5 EXPERIMENTS
In this section, we study the empirical performance of our Robust Behavior Cloning. We evaluate its robustness on several continuous control benchmarks simulated by the PyBullet simulator (Coumans & Bai (2016)): HopperBulletEnv-v0, Walker2DBulletEnv-v0, HalfCheetahBulletEnv-v0 and AntBulletEnv-v0. These tasks come with true reward functions in the simulator: we use only state observations and actions for the imitation algorithm, and we then use the reward to evaluate the obtained policy when running it in the simulator.
For each task, we collect the presumably optimal expert trajectories using pre-trained agents from Stable Baselines3². Specifically, we use the Soft Actor-Critic (Haarnoja et al. (2018)) pre-trained agents from Stable Baselines3, which we consider to be experts.
For these continuous control environments, the action space is bounded. Hence we generate the corrupted demonstration data set D as follows: we first randomly choose an ε-fraction of samples and corrupt their actions to the boundary (namely −1 or +1). We note that Definition 2.1 allows for arbitrary corruptions; we choose these outlier actions since they have maximal effect and cannot easily be detected.
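A sketch of this corruption process (all names are ours; Definition 2.1 would allow any other adversarial choice as well):

import numpy as np

def corrupt_demonstrations(actions, eps, seed=0):
    # Randomly choose an eps-fraction of samples and push their actions
    # to the boundary of the action space (-1 or +1).
    rng = np.random.default_rng(seed)
    corrupted = actions.copy()
    idx = rng.choice(len(actions), size=int(eps * len(actions)), replace=False)
    corrupted[idx] = rng.choice([-1.0, 1.0], size=corrupted[idx].shape)
    return corrupted

demo_actions = np.random.uniform(-0.2, 0.2, size=(60000, 6))
corrupted = corrupt_demonstrations(demo_actions, eps=0.2)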
We compare our RBC algorithm (Algorithm 1) to two natural baselines: the first directly applies BC to the corrupted demonstrations D without any robustness consideration; the second applies BC to expert demonstrations of the same sample size. In all settings, we fix the policy network as a 2-hidden-layer feed-forward neural network of size {500, 500} with ReLU activations, which is standard in the baselines.
5.1 CONVERGENCE OF OUR ALGORITHM
We illustrate the convergence and the performance of our algorithm to support our theoretical analysis. We track the performance of the different algorithms against the epoch number throughout training: at each epoch, we evaluate the current policy in the simulator for 20 trials and record the mean and standard deviation of the cumulative reward. This metric corresponds to the theoretical bound in Theorem 4.2.
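This evaluation protocol is a standard rollout loop (a sketch assuming a Gym-style environment and a policy with a deterministic act method; names are ours):

import numpy as np

def evaluate_policy(env, policy, num_trials=20):
    # Mean and std of the undiscounted cumulative reward over rollouts.
    returns = []
    for _ in range(num_trials):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy.act(obs))
            total += reward
        returns.append(total)
    return np.mean(returns), np.std(returns)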
We focus on four continuous control environments, where the observation spaces have around 30 dimensions and the action spaces are bounded in [−1, 1]. We fix the sample size at 60000 and vary the corruption fraction ε over 10% and 20%. Figure 2 validates our theory: our Robust Behavior Cloning nearly matches the performance of BC on expert demonstrations across environments and corruption ratios.
5.2 PERFORMANCE UNDER DIFFERENT SETUPS
This is the experiment shown in Section 1. In the Lunar Lander control environment, we fix the sample size N = 4000 and vary the fraction of corruptions ε. As expected, Figure 1 shows that our RBC is resilient to a constant fraction of outliers in the demonstrations ranging from 0 to 30%, achieving nearly the same performance as BC on expert demonstrations. In contrast, directly applying BC to corrupted demonstrations yields worse reward performance as the fraction of outliers grows.
6 DISCUSSIONS
6.1 RELATED WORK
Imitation Learning. Behavior Cloning (BC) is the most widely-used imitation learning algorithm (Pomerleau, 1988; Osa et al., 2018) due to its simplicity, effectiveness and scalability. From a theoretical viewpoint, it has been shown that BC achieves informational optimality in the offline setting (Rajaraman et al., 2020), with no further online interactions or knowledge of the transition dynamics P.
²The pre-trained agents were cloned from the following repositories: https://github.com/DLR-RM/stable-baselines3, https://github.com/DLR-RM/rl-baselines3-zoo.
With online interaction, a line of research focuses on improving BC in different scenarios. For example, Ross et al. (2011) proposed DAgger (Data Aggregation), which queries the expert policy in the online setting. Brantley et al. (2019) proposed using an ensemble of BC policies as an uncertainty measure and interacting with the environment to improve BC by taking the uncertainty into account, without the need to query the expert. Very recently, Xu et al. (2021) and Rajaraman et al. (2021) leveraged knowledge of the transition dynamics P to eliminate the compounding error/distribution shift issue in BC.
Besides BC, there are other imitation learning algorithms: Ho & Ermon (2016) used generative adversarial networks for distribution matching to learn a reward function; Reddy et al. (2019) provided a reinforcement learning framework for imitation learning by artificially setting the reward; Ghasemipour et al. (2020) unified several existing imitation learning algorithms as minimizing a distribution divergence between the learned policy and the expert demonstrations, to name a few.
Offline RL. Reinforcement learning leverages the signal from a reward function to train the policy. Different from IL, offline RL often does not require the demonstrations to be expert demonstrations (e.g. Fujimoto et al., 2019; Fujimoto & Gu, 2021; Kumar et al., 2020) (interested readers are referred to Levine et al. (2020)), and even expects the offline data to have high coverage over different sub-optimal policies (Buckman et al., 2020; Jin et al., 2021; Rashidinejad et al., 2021). The behavior-agnostic setting (Nachum et al., 2019; Mousavi et al., 2020) does not even require the collected data to come from a single policy.
The closest connection between offline RL and IL is the learning of the stationary visitation distribution, which, as in IL, does not involve a reward signal. A line of recent research, especially on off-policy evaluation, tries to learn the stationary visitation distribution of a given target policy (e.g. Liu et al., 2018; Nachum et al., 2019; Tang et al., 2020; Mousavi et al., 2020; Dai et al., 2020). In particular, Kostrikov et al. (2020) brings off-policy evaluation ideas to IL.
Robustness in IL and RL. Several recent papers consider corruption-robustness in either RL or IL. In RL, Zhang et al. (2021b) consider an adversary that may corrupt whole episodes in the online RL setting, while a more recent work (Zhang et al., 2021a) considers offline RL where an ε-fraction of the whole data set can be replaced by outliers. However, the corruption tolerance in Zhang et al. (2021a) scales with the dimension, whereas it can be a dimension-independent constant in this paper for robust offline IL. Many other papers consider perturbations, heavy tails, or corruptions in either the reward function (Bubeck et al., 2013) or the transition dynamics (Xu & Mannor, 2012; Tamar et al., 2014; Roy et al., 2017).
The papers most closely related to our robust IL setting are (Wu et al., 2019; Tangkaratt et al., 2020; 2021; Brown et al., 2019; Sasaki & Yamashina, 2020), which consider imperfect or noisy observations in imitation learning. However, their algorithms cannot handle arbitrary outliers in the demonstrations, and (Wu et al., 2019; Tangkaratt et al., 2020; 2021) require additional online interactions with the environment. Our algorithm achieves robustness guarantees from purely offline demonstrations, without potentially costly or risky interaction with the real-world environment.
6.2 SUMMARY AND FUTURE WORKS
In this paper, we considered the problem of corrupted demonstrations in imitation learning and proposed a novel robust algorithm, Robust Behavior Cloning, to deal with corruptions in an offline demonstration data set. The core technique is replacing the vanilla Maximum Likelihood Estimation with a Median-of-Means (MOM) objective, which guarantees policy estimation and reward performance in the presence of a constant ε-fraction of outliers. Our algorithm has strong robustness guarantees and works well in practice.
There are several avenues for future work. Since our work focuses on corruption in the offline data set, any online imitation learning method that builds on Behavior Cloning would inherit the corruption-robustness guarantees of our offline Robust Behavior Cloning. It would also be of interest to apply our algorithm to real-world environments, such as automated medical diagnosis and autonomous driving.
A PROOFS
The analysis of maximum likelihood estimation is standard in the i.i.d. supervised learning setting (van de Geer, 2000). In our proofs for the robust offline imitation learning algorithm, the analysis of the sequential decision making process leverages the martingale technique from (Zhang, 2006; Agarwal et al., 2020).
Our Robust Behavior Cloning (Definition 3.1) solves the following optimization

$$\hat{\pi}^{RBC} = \arg\min_{\pi \in \Pi} \max_{\pi' \in \Pi} \; \mathrm{median}_{1 \le j \le M} \left( \ell_j(\pi) - \ell_j(\pi') \right), \tag{7}$$

where the loss function ℓj(π) is the average Negative Log-Likelihood in the batch Bj:

$$\ell_j(\pi) = \frac{1}{b} \sum_{(s,a) \in B_j} -\log(\pi(a|s)). \tag{8}$$
This can be understood as a robust counterpart of maximum likelihood estimation in a sequential decision process.
With a slight abuse of notation, we use $x_i$ and $y_i$ to denote the observation and action; the underlying unknown expert distribution is $y_i \sim p(\cdot|x_i)$ with $p(y|x) = f^*(x, y)$. Following Assumption 4.1, we have the realizable $f^* \in \mathcal{F}$, and the discrete function class satisfies $|\mathcal{F}| < \infty$.

Let D denote the data set and let D′ denote a tangent sequence $\{x_i', y_i'\}_{i=1}^{|D|}$. The tangent sequence is defined by $x_i' \sim D_i(x_{1:i-1}, y_{1:i-1})$ and $y_i' \sim p(\cdot|x_i')$. Note that $x_i'$ follows the distribution $D_i$ and depends on the original sequence; hence the tangent sequence is independent conditional on D.
For this martingale process, we first introduce a decoupling Lemma from Agarwal et al. (2020).
Lemma A.1. [Lemma 24 in Agarwal et al. (2020)] Let D be a dataset, and let D′ be a tangent sequence. Let $\Gamma(f, D) = \sum_{(x,y) \in D} \varphi(f, (x,y))$ be any function which can be decomposed additively across samples in D, where $\varphi$ is any function of $f$ and the sample $(x, y)$. Let $\hat{f} = \hat{f}(D)$ be any estimator taking the dataset D as input and with range $\mathcal{F}$. Then we have

$$\mathbb{E}_D \left[ \exp\left( \Gamma(\hat{f}, D) - \log \mathbb{E}_{D'} \exp(\Gamma(\hat{f}, D')) - \log|\mathcal{F}| \right) \right] \le 1.$$
Next we present a Lemma which upper bounds the TV distance via a loss function closely related to the KL divergence. Such bounds for probability distributions are discussed extensively in the literature, e.g., Tsybakov (2009).
Lemma A.2. [Lemma 25 in Agarwal et al. (2020)] For any two conditional probability densities $f_1, f_2$ and any state distribution $D \in \Delta(\mathcal{X})$ we have

$$\mathbb{E}_{x \sim D} \| f_1(x, \cdot) - f_2(x, \cdot) \|_{TV}^2 \le -2 \log \mathbb{E}_{x \sim D,\, y \sim f_2(\cdot|x)} \exp\left( -\frac{1}{2} \log \frac{f_2(x,y)}{f_1(x,y)} \right).$$
A.1 PROOF OF THEOREM 4.1
With these Lemmas in hand, we are now equipped to prove our main theorem (Theorem 4.1), which guarantees the solution π̂RBC of eq. (3) is close to the optimal policy πE in TV distance.
Theorem A.1 (Theorem 4.1). Suppose we have a corrupted demonstration data set D with sample size N from Definition 2.1 with a constant corruption ratio ε < 0.5. Under Assumption 4.1, let τ be the objective value attained by π̂RBC in the optimization eq. (3) with batch size b ≤ 1/(3ε). Then, with probability at least 1 − c1δ, we have

$$\mathbb{E}_{s \sim \rho_{\pi_E}} \left\| \hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s) \right\|_{TV}^2 = O\left( \frac{\log(|\Pi|/\delta)}{N} + \tau \right).$$
Proof of Theorem 4.1. Throughout the proof we keep the notation of Lemma A.1 and Lemma A.2, where the state observation is x, the action is y, and the discrete function class is $\mathcal{F}$.
Similar to Agarwal et al. (2020), we first note that Lemma A.1 can be combined with a simple Chernoff bound to obtain an exponential tail bound: with probability at least 1 − c1δ, we have

$$-\log \mathbb{E}_{D'} \exp(\Gamma(\hat{f}, D')) \le -\Gamma(\hat{f}, D) + \log|\mathcal{F}| + \log(1/\delta). \tag{9}$$
Our proof technique relies on lower bounding the LHS of eq. (9) and upper bounding the RHS of eq. (9).

Let the batch size b ≤ 1/(3ε), which is a constant by Definition 3.1; then the number of batches M ≥ 3εN, so that at least a 2/3-fraction of the batches contains no corruptions.
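The counting behind this choice is immediate, since each corrupted sample can spoil at most one batch:

$$\#\{\text{corrupted batches}\} \le \epsilon N = \epsilon\, b M \le \epsilon \cdot \frac{1}{3\epsilon}\, M = \frac{M}{3},$$

so at least 2M/3 batches are free of corruptions and the median batch statistic is determined by clean batches.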
In the definition of RBC (Definition 3.1), we solve

$$\hat{\pi}^{RBC} = \arg\min_{\pi \in \Pi} \max_{\pi' \in \Pi} \; \mathrm{median}_{1 \le j \le M} \left( \ell_j(\pi) - \ell_j(\pi') \right). \tag{10}$$
Notice that since πE is one feasible solution of the inner maximization step in eq. (10), we can choose π′ = πE. Now we consider the objective function, which is the difference of Negative Log-Likelihoods between f and f*, i.e., ℓj(f) − ℓj(f*), with ℓj defined in eq. (4):

$$\ell_j(\pi) = \frac{1}{b} \sum_{(s,a) \in B_j} -\log(\pi(a|s)).$$
Hence, we choose Γ(f, D) in Lemma A.1 as

$$\Gamma_j(f, D) = \frac{N}{b} \sum_{i \in B_j} -\frac{1}{2} \log \frac{f^*(x_i, y_i)}{f(x_i, y_i)} = \frac{N}{2b} \sum_{i \in B_j} \left( \log f(x_i, y_i) - \log f^*(x_i, y_i) \right),$$

which is the difference of Negative Log-Likelihoods $N(\ell_j(f^*) - \ell_j(f))/2$ evaluated on a single batch $B_j$, $j \in [M]$. This is exactly the objective function on a single batch appearing in eq. (3).
Lower bound for the LHS of eq. (9). We apply the concentration bound eq. (9) to the uncorrupted batches, so the majority of all batches satisfies eq. (9). For those batches, the LHS of eq. (9) can be lower bounded by the TV distance according to Lemma A.2:
$$-\log \mathbb{E}_{D'} \left[ \exp\left( \frac{N}{b} \sum_{i \in B_j} -\frac{1}{2} \log \frac{f^\star(x_i', y_i')}{\hat{f}(x_i', y_i')} \right) \,\Big|\, D \right] \overset{(i)}{=} -\frac{N}{b} \sum_{i \in B_j} \log \mathbb{E}_{x,y \sim D_i} \exp\left( -\frac{1}{2} \log \frac{f^\star(x, y)}{\hat{f}(x, y)} \right) \overset{(ii)}{\ge} \frac{N}{2b} \sum_{i \in B_j} \mathbb{E}_{x \sim D_i} \left\| \hat{f}(x, \cdot) - f^\star(x, \cdot) \right\|_{TV}^2, \tag{11}$$
where (i) follows from the independence between $\hat{f}$ and D′ due to the decoupling technique, and (ii) follows from Lemma A.2, which upper bounds the Total Variation distance.
Upper bound for the RHS of eq. (9). Note that the objective is the median of means over the batches, and f* is one feasible solution of the inner maximization step in eq. (10). Since τ is the objective value attained by π̂RBC in the optimization eq. (3), we have ℓMed(π) − ℓMed(π′) ≤ τ for the median batch BMed, which is equivalent to −ΓMed(f, D) ≤ Nτ/2.
Hence for the median batch BMed, the RHS of eq. (9) can be upper bounded by

$$-\Gamma_{Med}(\hat{f}, D) + \log|\mathcal{F}| + \log(1/\delta) \le \log|\mathcal{F}| + \log(1/\delta) + N\tau/2. \tag{12}$$
Putting together the pieces eq. (11) and eq. (12) for BMed, we have

$$\mathbb{E}_{s \sim \rho_{\pi_E}} \left\| \hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s) \right\|_{TV}^2 = O\left( \frac{\log(|\mathcal{F}|/\delta)}{N} + \tau \right),$$
with probability at least 1− c1δ.
A.2 PROOF OF THEOREM 4.2
With the supervised-learning-style guarantee of Theorem 4.1 in hand, which upper bounds $\mathbb{E}_{s \sim \rho_{\pi_E}} \|\hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s)\|_{TV}^2$, we can now present the sub-optimality guarantee on the reward of π̂RBC. This bound directly corresponds to the reward performance of the policy.
Theorem A.2 (Theorem 4.2). Under the same setting as Theorem 4.1, we have

$$J_{\pi_E} - J_{\hat{\pi}^{RBC}} \le O\left( \frac{1}{(1-\gamma)^2} \sqrt{\frac{\log(|\mathcal{F}|/\delta)}{N} + \tau} \right),$$

with probability at least 1 − c1δ.
Proof of Theorem 4.2. This part is similar to Agarwal et al. (2019). We have

$$(1-\gamma)(J_{\pi_E} - J_{\hat{\pi}^{RBC}}) = \mathbb{E}_{s \sim \rho_{\pi_E}} \mathbb{E}_{a \sim \pi_E(\cdot|s)} A^{\hat{\pi}^{RBC}}(s, a) \le \frac{1}{1-\gamma} \sqrt{\mathbb{E}_{s \sim \rho_{\pi_E}} \| \hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s) \|_1^2} = \frac{2}{1-\gamma} \sqrt{\mathbb{E}_{s \sim \rho_{\pi_E}} \| \hat{\pi}^{RBC}(\cdot|s) - \pi_E(\cdot|s) \|_{TV}^2},$$

where we use the fact that $\sup_{s,a,\pi} |A^\pi(s,a)| \le \frac{1}{1-\gamma}$ for the advantage function, since the reward is always bounded between 0 and 1.
Combining with Theorem 4.1, we have

$$J_{\pi_E} - J_{\hat{\pi}^{RBC}} \le O\left( \frac{1}{(1-\gamma)^2} \sqrt{\frac{\log(|\mathcal{F}|/\delta)}{N} + \tau} \right),$$
with probability at least 1 − c1δ. | 1. What is the focus of the paper regarding offline imitation learning?
2. What are the strengths of the proposed algorithm, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its empirical evaluation and comparisons with other works?
4. How does the reviewer assess the clarity and ease of reading the paper's content?
5. Are there any questions regarding the tradeoff between the fraction of corrupted data points and the number of batches needed for ensuring the algorithm's effectiveness? | Summary Of The Paper
Review | Summary Of The Paper
This paper considers an offline imitation learning task wherein a constant ε-fraction of demonstrations have been potentially arbitrarily corrupted. This data corruption is motivated by sub-optimal experts and/or erroneous/adversarial sensors. In order to address this, the paper proposes minimizing a Median-of-Means (MOM) objective. Theoretically, it is shown that up to constant factors, the resulting learned policy achieves the same statistical rates as behavior cloning, which was shown to be optimal for the offline imitation learning setting in recent work by Agarwal et al. This analysis is done under the assumption of a discrete policy class containing the expert policy. Algorithmically, an approach based on alternating gradient descent/ascent is proposed, and it is shown on two continuous control benchmarks in PyBullet (LunarLanderContinuous-v2 and HalfCheetahBulletEnv0) that the proposed approach compares favorably to behavior cloning as applied to uncorrupted data, whereas vanilla behavior cloning performs very poorly.
Review
Strengths:
The paper considers a problem of relevance to the ICLR community, and proposes what appears to be a novel algorithm with both empirical and theoretical support.
I found the paper very clear and easy to read
The theoretical results are tight, in that they match the optimal statistical rates achievable in offline IL.
Weaknesses:
Theorems 4.1 and 4.2 have no explicit dependence on the fraction of corrupted data points ε and/or the number of batches M needed to ensure that at least 50% of the batches are not corrupted (which is assumed in the proof). Presumably there is a tradeoff/dependence here. In the extreme case where ε ≈ 0.5, it would seem that each "batch" would be of size 1, at which point it isn't clear if/how the algorithm would work. Some more explicit delineations of these constants would be very useful.
The empirical evaluation was somewhat underwhelming: I would have expected to see several more environments tested.
No comparison to other robust imitation learning algorithms was provided, and so it is unclear if the algorithm is doing well, or if the tasks are easy enough that any sensible approach to limiting the effects of outliers would do well. If the reason is that no other offline robust IL algorithms exist, then why not compare DAGGER w/ a MOM cost (which the authors repeatedly state is easy to implement) to some of the interactive robust IL methods outlined in the related work section. At the very least, comparisons to more sophisticated offline IL methods should be done, e.g., the method proposed in Ho and Ermon 2016. Statistical optimality and practical performance are not always the same, and so only comparing to BC is not as strong as suggested by the results as Agarwal et al.
ICLR | Title
Critical Learning Periods in Deep Networks
Abstract
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of “Information Plasticity”. Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
1 INTRODUCTION
Critical periods are time windows of early post-natal development during which sensory deficits can lead to permanent skill impairment (Kandel et al., 2013). Researchers have documented critical periods affecting a range of species and systems, from visual acuity in kittens (Wiesel & Hubel, 1963b; Wiesel, 1982) to song learning in birds (Konishi, 1985). Uncorrected eye defects (e.g., strabismus, cataracts) during the critical period for visual development lead to amblyopia in one in fifty adults.
The cause of critical periods is ascribed to the biochemical modulation of windows of neuronal plasticity (Hensch, 2004). In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models. This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena.
We propose using the information in the weights, measured by an efficient approximation of the Fisher Information, to study critical period phenomena in DNNs. We show that, counterintuitively, the information in the weights does not increase monotonically during training. Instead, a rapid growth in information (“memorization phase”) is followed by a reduction of information (“reorganization” or “forgetting” phase), even as classification performance keeps increasing. This behavior is consistent across different tasks and network architectures. Critical periods are centered in the memorization phase.
∗These authors contributed equally to this work.
Figure 1: DNNs exhibit critical periods. (A) Final accuracy achieved by a CNN trained with a cataract-like deficit as a function of the training epoch N at which the deficit is removed (solid line). Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed. As in animal models, critical periods coincide with the early learning phase during which, in the absence of deficits, test accuracy would rapidly increase (dashed). (B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and normal visual acuity development (in kittens) as a function of their age (dashed) (Giffin & Mitchell, 1978; Mitchell, 1988). Sensitivity during learning: (C) Final test accuracy of a DNN as a function of the onset of a short 40-epoch deficit. The decrease in the final performance can be used to measure the sensitivity to deficits. The most sensitive epochs correspond to the early rapid learning phase, before the test error (dashed line) begins to plateau. Afterwards, the network is largely unaffected by the temporary deficit. (D) This can be compared with changes in the degree of functional disconnection (normalized numbers of V1 monocular cells disconnected from the contralateral eye) as a function of the kittens’ age at the onset of a 10-12-day deficit window (Olson & Freeman, 1980). Dashed lines are as in A and B respectively, up to a re-scaling of the y-axis.
Our findings, described in Section 2, indicate that the early transient is critical in determining the final solution of the optimization associated with training an artificial neural network. In particular, the effects of sensory deficits during a critical period cannot be overcome, no matter how much additional training is performed. Yet most theoretical studies have focused on the network behavior after convergence (Representation Learning) or on the asymptotic properties of the optimization scheme used for training (SGD).
To study this early phase, in Section 3, we use the Fisher Information to quantify the effective connectivity of a network during training, and introduce the notion of Information Plasticity in learning. Information Plasticity is maximal during the memorization phase, and decreases in the reorganization phase. We show that deficit sensitivity during critical periods correlates strongly with the effective connectivity.
In Section 4 we discuss our contribution in relation to previous work. When considered in conjunction with recent results on representation learning (Achille & Soatto, 2018), our findings indicate that forgetting (reducing information in the weights) is critical to achieving invariance to nuisance variability as well as independence of the components of the representation, but comes at the price of reduced adaptability later in the training. We also hypothesize that the loss of physical connectivity in biology (neural plasticity) could be a consequence, rather than a cause, of the loss of Information Plasticity, which depends on how the information is distributed throughout a network during the early stages of learning. These results also shed light on the common practice of pre-training a model on a task and then fine-tuning it for another, one of the most rudimentary forms of transfer learning. Our experiments show that, rather than helpful, pre-training can be detrimental, even if the tasks are similar (e.g., same labels, slightly blurred images).
2 EXPERIMENTS
A notable example of a critical-period-related deficit, commonly affecting humans, is amblyopia (reduced visual acuity in one eye) caused by cataracts during infancy or childhood (Taylor et al., 1979;
von Noorden, 1981). Even after surgical correction of cataracts, the ability of the patients to regain normal acuity in the affected eye depends both on the duration of the deficit and on its age of onset, with earlier and longer deficits causing more severe effects. In this section, we aim to study the effects of similar deficits in DNNs. To do so, we train a standard All-CNN architecture based on Springenberg et al. (2014) (see Appendix A) to classify objects in small 32 × 32 images from the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). We train with SGD using an exponential annealing schedule for the learning rate. To simulate the effect of cataracts, for the first t0 epochs the images in the dataset are downsampled to 8 × 8 and then upsampled back to 32 × 32 using bilinear interpolation, in practice blurring the image and destroying small-scale details.1 After that, the training continues for 160 more epochs, giving the network time to converge and ensuring it is exposed to the same number of uncorrupted images as in the control (t0 = 0) experiment.
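A sketch of this cataract-like transform (assuming a batch of image tensors in NCHW format; the function name is ours):

import torch.nn.functional as F

def cataract_blur(images, low_res=8):
    # Downsample to low_res x low_res, then upsample back to the original
    # resolution with bilinear interpolation, destroying small-scale details.
    small = F.interpolate(images, size=(low_res, low_res),
                          mode='bilinear', align_corners=False)
    return F.interpolate(small, size=images.shape[-2:],
                         mode='bilinear', align_corners=False)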
DNNs exhibit critical periods: In Figure 1, we plot the final performance of a network affected by the deficit as a function of the epoch t0 at which the deficit is corrected. We can readily observe the existence of a critical period: If the blur is not removed within the first 40-60 epochs, the final performance is severely decreased when compared to the baseline (up to a threefold increase in error). The decrease in performance follows trends commonly observed in animals, and may be qualitatively compared, for example, to the loss of visual acuity observed in kittens monocularly deprived from birth as a function of the length of the deficit (Mitchell, 1988).2
We can measure more accurately the sensitivity to a blur deficit during learning by introducing the deficit in a short window of constant length (40 epochs), starting at different epochs, and then measure the decrease in the DNN’s final performance compared to the baseline (Figure 1). Doing this, we observe that the sensitivity to the deficit peaks in the central part of the early rapid learning phase (at around 30 epochs), while introducing the deficit later produces little or no effect. A similar experiment performed on kittens, using a window of 10-12 days during which the animals are monocularly deprived, again shows a remarkable similarity between the profiles of the sensitivity curves (Olson & Freeman, 1980).
High-level deficits are not associated with a critical period: A natural question is whether any change in the input data distribution will have a corresponding critical period for learning. This is not the case for neuronal networks, which remain plastic enough to adapt to high-level changes in sensory processing (Daw, 2014). For example, it is well-reported that even adult humans can rapidly adapt to certain drastic changes, such as the inversion of the visual field (Stratton, 1896; Kohler, 1964). In Figure 2, we observe that DNNs are also largely unaffected by high-level deficits – such as vertical flipping of the image, or random permutation of the output labels: After deficit correction, the network quickly recovers its baseline performance. This hints at a finer interplay between the structure of the data distribution and the optimization algorithm, resulting in the existence of a critical period.
¹We employed this method, instead of a simpler Gaussian blur, since it has a very similar effect and makes the quantification of information loss clearer.
²See Appendix C for details on how to compare different models and deficits.
Sensory deprivation: We now apply to the network a more drastic deficit, where each image is replaced by white noise. Figure 2 shows how this extreme deficit has a remarkably less severe effect than the one obtained by only blurring images: Training the network with white noise does not provide any information on the natural images, and results in milder effects than those caused by a deficit (e.g., image blur) which instead conveys some information, but leads the network to (incorrectly) learn that no fine structure is present in the images. A similar effect has been observed in animals, where a period of early sensory deprivation (dark-rearing) can lengthen the critical period and thus cause less severe effects than those documented in light-reared animals (Mower, 1991). We refer the reader to Appendix C for a more detailed comparison between sensory deprivation and training on white noise.
Architecture, depth, and learning rate annealing: Figure 3 shows that a fully-connected network trained on the MNIST digit classification dataset also shows a critical period for the image blur deficit. Therefore, the convolutional structure is not necessary, nor is the use of natural images. Similarly, a ResNet-18 trained on CIFAR-10 also has a critical period, which is remarkably sharper than the one found in a standard convolutional network (Figure 1). This is especially interesting, since ResNets allow for easier backpropagation of gradients to the lower layers, thus suggesting that the critical period is not caused by vanishing gradients. However, Figure 2 (Right) shows that the presence of a critical period does indeed depend critically on the depth of the network. In Figure 3, we confirm that a critical period exists even when the network is trained with a constant learning rate, and therefore cannot be explained by an annealed learning rate in later epochs.
Optimization method and weight decay: Figure 3 (Bottom Right) shows that when using Adam as the optimization scheme, which renormalizes the gradients using a running mean of their first two moments, we still observe a critical period similar to that of standard SGD. However, changing the
hyperparameters of the optimization can change the shape of the critical period: In Figure 3 (Bottom Left) we show that increasing weight decay makes critical periods longer and less sharp. This can be explained as it both slows the convergence of the network, and it limits the ability of higher layers to change to overcome the deficit, thus encouraging lower layers to also learn new features.
3 FISHER INFORMATION ANALYSIS
We have established empirically that, in animals and DNNs alike, the initial phases of training are critical to the outcome of the training process. In animals, this strongly relates to changes in the brain architecture of the areas associated with the deficit (Daw, 2014). This is inevitably different in artificial networks, since their connectivity is formally fixed at all times during training. However, not all the connections are equally useful to the network: Consider a network encoding the approximate posterior distribution pw(y|x), parameterized by the weights w, of the task variable y given an input image x. The dependency of the final output from a specific connection can be estimated by perturbing the corresponding weight and looking at the magnitude of the change in the final distribution. Specifically, given a perturbation w′ = w + δw of the weights, the discrepancy between the pw(y|x) and the perturbed network output pw′(y|x) can be measured by their KullbackLeibler divergence, which, to second-order approximation, is given by:
$$\mathbb{E}_x\, \mathrm{KL}\left( p_{w'}(y|x)\, \|\, p_w(y|x) \right) = \delta w \cdot F \delta w + o(\delta w^2),$$
where the expectation over x is computed using the empirical data distribution $\hat{Q}(x)$ given by the dataset, and $F := \mathbb{E}_{x \sim \hat{Q}(x)} \mathbb{E}_{y \sim p_w(y|x)} [\nabla_w \log p_w(y|x)\, \nabla_w \log p_w(y|x)^T]$ is the Fisher Information Matrix (FIM). The FIM can thus be considered a local metric measuring how much the perturbation of a single weight (or a combination of weights) affects the output of the network (Amari & Nagaoka, 2000). In particular, weights with low Fisher Information can be changed or “pruned” with little effect on the network’s performance. This suggests that the Fisher Information can be used as a measure of the effective connectivity of a DNN, or, more generally, of the “synaptic strength” of a connection (Kirkpatrick et al., 2017). Finally, the FIM is also a positive semidefinite approximation of the Hessian of the loss function (Martens, 2014), and hence of the curvature of the loss landscape at a particular point w during training, providing an elegant connection between the FIM and the optimization procedure (Amari & Nagaoka, 2000), which we will also employ later.
Unfortunately, the full FIM is too large to compute. Rather, we use its trace to measure the global or layer-wise connection strength, which we can compute efficiently using (Appendix A):
$$\mathrm{tr}(F) = \mathbb{E}_{x \sim \hat{Q}(x)} \mathbb{E}_{y \sim p_w(y|x)} \left[ \| \nabla_w \log p_w(y|x) \|^2 \right].$$

In order to capture the behavior of the off-diagonal terms, we also tried computing the log-determinant of the full matrix using the Kronecker-Factorized approximation of Martens & Grosse (2015), but we observed the same qualitative trend as the trace. Since the FIM is a local measure, it is very sensitive to the irregularities of the loss landscape. Therefore, in this section we mainly use ResNets, which have a relatively smooth landscape (Li et al., 2018). For other architectures we use instead a more robust estimator of the FIM based on the injection of noise in the weights (Achille & Soatto, 2018), also described in Appendix A.
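In practice, this trace can be estimated by Monte Carlo, sampling the labels from the model's own predictive distribution rather than using the ground-truth labels. A sketch for a classifier returning logits follows; names are ours, and per-sample gradients are computed naively here for clarity, not speed:

import torch
import torch.nn.functional as F

def fisher_trace(model, data_loader, device='cpu'):
    # tr(F) = E_x E_{y ~ p_w(y|x)} [ || grad_w log p_w(y|x) ||^2 ]
    trace, count = 0.0, 0
    for x, _ in data_loader:              # ground-truth labels are unused
        for xi in x.to(device):
            logits = model(xi.unsqueeze(0))
            y = torch.distributions.Categorical(logits=logits).sample()
            logp = F.log_softmax(logits, dim=-1)[0, y.item()]
            grads = torch.autograd.grad(logp, model.parameters())
            trace += sum((g ** 2).sum().item() for g in grads)
            count += 1
    return trace / count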
Two phases of learning: As its name suggests, the FIM can be thought of as a measure of the quantity of information about the training data that is contained in the model (Fisher, 1925). Based on this, one would expect the overall strength of the connections to increase monotonically as we acquire information from experience. However, this is not the case: While during an initial phase the network acquires information about the data, which results in a large increase in the strength of the connections, once the performance in the task begins to plateau, the network starts decreasing the overall strength of its connections. Notably, this does not correspond to a reduction in performance; rather, performance keeps slowly improving. This can be seen as a “forgetting”, or “compression”, phase, during which redundant connections are eliminated and non-relevant variability in the data is discarded. It is well-established how the elimination (“pruning”) of unnecessary synapses is a fundamental process during learning and brain development (Rakic et al., 1986) (Figure 4, Center); in Figure 4 (Left) an analogous phenomenon is clearly and quantitatively shown for DNNs.
Strikingly, these changes in the connection strength are closely related to the sensitivity to critical-period-inducing deficits such as image blur, computed using the “sliding window” method as in Figure 1. In Figure 4 we see that the sensitivity closely follows the trend of the FIM. This is remarkable since the FIM is a local quantity computed at a single point during the training of a network in the absence of deficit, while sensitivity during a critical period is computed, using test data, at the end of the impaired network training. Figure 4 (Right) further emphasizes the effect of deficits on the FIM: in the presence of a deficit, the FIM grows and remains substantially higher even after the deficit is removed. This may be attributed to the fact that, when the data are so corrupted that classification is impossible, the network is forced to memorize the labels, therefore increasing the quantity of information needed to perform the same task.
Layer-wise effects of deficits: A layer-wise analysis of the FIM sheds further light on how the deficit affects the network. When the network (in this case All-CNN, which has a clearer division among layers than ResNet) is trained without deficits, the most important connections are in the intermediate layers (Figure 5, Left), which can process the input CIFAR-10 image at the most informative intermediate scale. However, if the network is initially trained on blurred data (Figure 5, top right), the strength of the connections is dominated by the top layer (Layer 6). This is to be expected, since the low-level and mid-level structures of the images are destroyed, making the lower layers ineffective. However, if the deficit is removed early in the training (Figure 5, top center), the network manages to “reorganize”, reducing the information contained in the last layer, and, at the same time, increasing the information in the intermediate layers. We refer to these phenomena as changes in “Information Plasticity”. If, however, the data change occurs after the consolidation phase, the network is unable to change its effective connectivity: The connection strength of each layer remains substantially constant. The network has lost its Information Plasticity and is past its critical period.
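The layer-wise quantities used here can be obtained by simply grouping the squared gradient norms in the trace estimate by parameter group (a sketch, reusing the sampling scheme above; names are ours):

import torch
import torch.nn.functional as F

def layerwise_fisher_trace(model, x_batch):
    # Contribution of each named parameter group to tr(F) on one batch.
    names, params = zip(*model.named_parameters())
    traces = {name: 0.0 for name in names}
    for xi in x_batch:
        logits = model(xi.unsqueeze(0))
        y = torch.distributions.Categorical(logits=logits).sample()
        logp = F.log_softmax(logits, dim=-1)[0, y.item()]
        grads = torch.autograd.grad(logp, params)
        for name, g in zip(names, grads):
            traces[name] += (g ** 2).sum().item()
    return traces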
Critical periods as bottleneck crossings: The analysis of the FIM also sheds light on the geometry of the loss function and the learning dynamics. Since the FIM can be interpreted as the local curvature of the residual landscape, Fig. 4 shows that learning entails crossing bottlenecks: In the initial phase the network enters regions of high curvature (high Fisher Information), and once consolidation begins, the curvature decreases, allowing it to cross the bottleneck and enter the valley below. If the statistics change after crossing the bottleneck, the network is trapped. In this interpretation, the early phases of convergence are critical in leading the network towards the “right” final valley. The end of critical periods comes after the network has crossed all bottlenecks (and thus learned the features) and entered a wide valley (region of the weight space with low curvature, or low Fisher Information).
4 DISCUSSION AND RELATED WORK
Critical periods have thus far been considered an exclusively biological phenomenon. At the same time, the analysis of DNNs has focused on asymptotic properties and neglected the initial transient behavior. To the best of our knowledge, we are the first to show that artificial neural networks exhibit critical period phenomena, and to highlight the critical role of the transient in determining the asymptotic performance of the network. Inspired by the role of synaptic connectivity in modulating critical periods, we introduce the use of Fisher Information to study this initial phase. We show that the initial sensitivity to deficits closely follows changes in the FIM, both global, as the network first rapidly increases and then decreases the amount of stored information, and layer-wise, as the network “reorganizes” its effective connectivity in order to optimally process information.
Our work naturally relates to the extensive literature on critical periods in biology. Despite artificial networks being an extremely reductionist approximation of neuronal networks, they exhibit behaviors that are qualitatively similar to the critical periods observed in human and animal models. Our information analysis shows that the initial rapid memorization phase is followed by a loss of Information Plasticity which, counterintuitively, further improves the performance. On the other hand, when combined with the analysis of Achille & Soatto (2018) this suggests that a “forgetting” phase may be desirable, or even necessary, in order to learn robust, nuisance-invariant representations.
The existence of two distinct phases of training has been observed and discussed by Shwartz-Ziv & Tishby (2017), although their analysis builds on the (Shannon) information of the activations, rather than the (Fisher) information in the weights. On a multi-layer perceptron (MLP), Shwartz-Ziv & Tishby (2017) empirically link the two phases to a sudden increase in the gradients’ covariance. It may be tempting to compare these results with our Fisher Information analysis. However, it must be noted that the FIM is computed using the gradients with respect to the model prediction, not to the ground truth label, leading to important qualitative differences. In Figure 6, we show that the covariance and norm of the gradients exhibit no clear trends during training with and without deficits, and, therefore, unlike the FIM, do not correlate with the sensitivity to critical periods. However,
a connection between our FIM analysis and the information in the activations can be established based on the work of Achille & Soatto (2018), which shows that the FIM of the weights can be used to bound the information in the activations. In fact, we may intuitively expect that pruning of connections naturally leads to loss of information in the corresponding activations. Thus, our analysis corroborates and expands on some of the claims of Shwartz-Ziv & Tishby (2017), while using an independent framework.
Aside from being more closely related to the deficit sensitivity during critical periods, the Fisher Information also has a number of technical advantages: Its diagonal is simple to estimate, even on modern state-of-the-art architectures and compelling datasets, and it is less sensitive to the choice of estimator of mutual information, avoiding some of the common criticisms of the use of information quantities in the analysis of deep learning models. Finally, the FIM allows us to probe fine changes in the effective connectivity across the layers of the network (Figure 5), which are not visible in Shwartz-Ziv & Tishby (2017).
A complete analysis of the activations should account not only for the amount of information (both task- and nuisance-related), but also for its accessibility, e.g., how easily task-related information can be extracted by a linear classifier. Following a similar idea, Montavon et al. (2011) aim to study the layer-wise, or “spatial” (but not temporal) evolution of the simplicity of the representation by performing a principal component analysis (PCA) of a radial basis function (RBF) kernel embedding of each layer representation. They show that, on a multi-layer perceptron, task-relevant information increasingly concentrates on the first principal components of the representation’s embedding, implying that it becomes more easily “accessible” layer after layer, while nuisance information (when it is codified at all) is encoded in the remaining components. In our work we instead focus on the temporal evolution of the weights. However, it is important to note that a network with simpler weights (as measured by the FIM) also requires a simpler smooth representation (as measured, e.g., by the RBF embedding) in order to operate properly, since it needs to be resistant to perturbations of the weights. Thus our analysis is wholly compatible with the intuitions of Montavon et al. (2011). It would also be interesting to study the joint spatio-temporal evolution of the network using both frameworks at once.
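As a concrete illustration of this kind of accessibility analysis, the sketch below uses scikit-learn; the arrays are random stand-ins for real layer activations and labels, and all variable names are ours, so treat it as a schematic of the idea rather than a reproduction of Montavon et al. (2011).

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

# Stand-ins for the activations of one layer (one row per sample) and task labels.
rng = np.random.default_rng(0)
acts = rng.normal(size=(512, 192))
labels = rng.integers(0, 10, size=512)

# RBF-kernel embedding of the layer representation, reduced to its leading components.
kpca = KernelPCA(n_components=16, kernel="rbf", gamma=1.0 / acts.shape[1])
components = kpca.fit_transform(acts)

# Probe how "accessible" the task information is: fit a linear classifier on an
# increasing number of leading components and track its accuracy.
for k in (1, 2, 4, 8, 16):
    clf = LogisticRegression(max_iter=1000).fit(components[:, :k], labels)
    print(f"top-{k} components: accuracy {clf.score(components[:, :k], labels):.2f}")
```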
One advantage of focusing on the information of the weights rather than on the activations, or behavior of the network, is to have a readout of the “effective connectivity” during critical periods, which can be compared to similar readouts in animals. In fact, “behavioral” readouts upon deficit removal, both in artificial and neuronal networks, can potentially be confounded by deficit-coping changes at different levels of the visual pathways (Daw, 2014; Knudsen, 2004). On the other hand, deficits in deprived animals are mirrored by abnormalities in the circuitry of the visual pathways, which we characterize in DNNs using the FIM to study its “effective connectivity”, i.e., the connections that are actually employed by the network to solve the task. Sensitivity to critical periods and the trace of the Fisher Information peak at the same epochs, in accord with the evidence that skill development and critical periods in neuronal networks are modulated by changes (generally experience-dependent) in synaptic plasticity (Knudsen, 2004; Hensch, 2004). Our layer-wise analysis of the Fisher Information (Figure 5) also shows that visual deficits reinforce higher layers to the detriment of intermediate layers, leaving low-level layers virtually untouched. If the deficit is removed after the critical period ends, the network is not able to reverse these effects. Although the two systems are radically different, a similar response can be found in the visual pathways of animal models: Lower levels (e.g., retina, lateral geniculate nucleus) and higher-level visual areas (e.g., V2 and post-V2) show little remodeling upon deprivation, while most changes happen in different layers of V1 (Wiesel & Hubel, 1963a; Hendrickson et al., 1987).
An insightful interpretation of critical periods in animal models was proposed by Knudsen (2004): The initial connections of neuronal networks are unstable and easily modified (highly plastic), but as more “samples” are observed, they change and reach a more stable configuration which is difficult to modify. Learning can, however, still happen within the newly created connectivity pattern. This is largely compatible with our findings: Sensitivity to critical-period-inducing deficits peaks when connections are remodeled (Figure 4, Left), and different connectivity profiles are observed in networks trained with and without a deficit (Figure 5). Moreover, high-level deficits such as image flipping and label permutation, which do not require restructuring of the network’s connections in order to be corrected, do not exhibit a critical period.
Applying a deficit at the beginning of the training may be compared to the common practice of pre-training, which is generally found to improve the performance of the network. Erhan et al. (2010) study the somewhat related, but now seldom used, practice of layer-wise unsupervised pre-training, and suggest that it may act as a regularizer by moving the weights of the network towards an area of the loss landscape closer to the attractors for good solutions, and that early examples have a stronger effect in steering the network towards particular solutions. Here, we have shown that pre-training on blurred data can have the opposite effect; i.e., it can severely decrease the final performance of the network. However, in our case, interpreting the deficit’s effect as moving the network close to a bad attractor is difficult to reconcile with the smooth transition observed in the critical periods, since the network would either converge to this attractor, and thus have low accuracy, or escape it completely.
Instead, we reconcile our experiments with the geometry of the loss function by introducing a different explanation based on the interpretation of the FIM as an approximation of the local curvature. Figure 4 suggests that SGD encounters two different phases during the network training: At first, the network moves towards high-curvature regions of the loss landscape, while in the second phase the curvature decreases and the network eventually converges to a flat minimum (as observed in Keskar et al. (2017)). We can interpret these as the network crossing narrow bottlenecks during its training in order to learn useful features, before eventually entering a flat region of the loss surface once learning is completed and ending up trapped there. When combining this assumption with our deficit sensitivity analysis, we can hypothesize that the critical period occurs precisely upon crossing of this bottleneck. It is also worth noting that there is evidence that convergence to flat minima (minima with low curvature) in a DNN correlates with good generalization performance (Hochreiter & Schmidhuber, 1997; Li et al., 2018; Chaudhari et al., 2017; Keskar et al., 2017). Indeed, using this interpretation, Figure 4 (Right) tells us that networks more affected by the deficit converge to sharper minima. However, we have also found that the performance of the network is already mostly determined during the early “sensitive” phase. The final sharpness at convergence may therefore be an epiphenomenon, rather than the cause of good generalization.
5 CONCLUSION
Our goal in this paper is not so much to investigate the human (or animal) brain through artificial networks, as to understand fundamental information processing phenomena, in both their biological and artificial implementations. It is also not our goal to suggest that, since they both exhibit critical periods, DNNs are necessarily a valid model of neurobiological information processing, although recent work has emphasized this aspect. We engage in an “Artificial Neuroscience” exercise in part to address a technological need to develop “explainable” artificial intelligence systems whose behavior can be understood and predicted. While traditionally well-understood mathematical models were used by neuroscientists to study biological phenomena, information processing in modern artificial networks is often just as poorly understood as in biology, so we chose to exploit well-known biological phenomena as probes to study information processing in artificial networks.
Conversely, it would also be interesting to explore ways to test whether biological networks prune connections as a consequence of a loss of Information Plasticity, rather than as a cause. The mechanisms underlying network reconfiguration during learning and development might be an evolutionary outcome obtained under the pressure of fundamental information processing phenomena.
ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their thoughtful feedback, and for suggesting new experiments and relevant literature. Supported by ONR N00014-17-1-2072, ARO W911NF-17-1-0304, AFOSR FA9550-15-1-0229 and FA8650-11-1-7156.
A DETAILS OF THE EXPERIMENTS
A.1 ARCHITECTURES AND TRAINING
In all of the experiments, unless otherwise stated, we use the following All-CNN architecture, adapted from Springenberg et al. (2014):
conv 96 - conv 96 - conv 192 s2 - conv 192 - conv 192 - conv 192 s2 - conv 192 - conv1 192 - conv1 10 - avg. pooling - softmax
where each conv block consists of a 3× 3 convolution, batch normalization and ReLU activations. conv1 denotes a 1 × 1 convolution. The network is trained with SGD, with a batch size of 128, learning rate starting from 0.05 and decaying smoothly by a factor of .97 at each epoch. We also use weight decay with coefficient 0.001. In the experiments with a fixed learning rate, we fix the learning rate to 0.001, which we find to allow convergence without excessive overfitting. For the ResNet experiments, we use the ResNet-18 architecture from He et al. (2016) with initial learning rate 0.1, learning rate decay .97 per epoch, and weight decay 0.0005. When training with Adam, we use a learning rate of 0.001 and weight decay 0.0001.
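For reference, here is a minimal PyTorch sketch of the All-CNN and training setup described above; the class and helper names are ours, and details not stated in the text (e.g., padding choices) are our assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, kernel=3, stride=1):
    # conv -> batch norm -> ReLU; padding preserves spatial size for 3x3 convs
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel, stride=stride, padding=kernel // 2),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class AllCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 96), conv_block(96, 96), conv_block(96, 192, stride=2),
            conv_block(192, 192), conv_block(192, 192), conv_block(192, 192, stride=2),
            conv_block(192, 192),
            conv_block(192, 192, kernel=1),          # conv1 192
            conv_block(192, num_classes, kernel=1),  # conv1 10
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        # logits; the softmax is folded into the cross-entropy loss
        return self.pool(self.features(x)).flatten(1)

model = AllCNN()
# SGD with batch size 128, lr 0.05 decayed by a factor 0.97 per epoch, weight decay 0.001:
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=0.001)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.97)
```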
When experimenting with varying network depths, we use the following architecture:
conv 96 - [conv 96 · 2^(i-1) - conv 96 · 2^i s2] for i = 1, …, n - conv 96 · 2^n - conv1 96 · 2^n - conv1 10
In order to avoid interference between the annealing scheme and the architecture, in these experiments we fix the learning rate to 0.001.
The Fully Connected network used for the MNIST experiments has hidden layers of size [2500, 2000, 1500, 1000, 500]. All hidden layers use batch normalization followed by ReLU activations. We fix the learning rate to 0.005. Weight decay is not used. We use data augmentation with random translations up to 4 pixels and random horizontal flipping. For MNIST, we pad the images with zeros to bring them to size 32× 32.
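A sketch of this augmentation pipeline with torchvision follows; implementing “random translations up to 4 pixels” as padding plus a random crop is our choice, so treat the exact transforms as assumptions.

```python
import torchvision.transforms as T

# CIFAR-10: random translations up to 4 pixels and random horizontal flipping.
cifar_train = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# MNIST: zero-pad 28x28 digits to 32x32, then the same translation augmentation.
mnist_train = T.Compose([
    T.Pad(2),
    T.RandomCrop(32, padding=4),
    T.ToTensor(),
])
```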
A.2 APPROXIMATIONS OF THE FISHER INFORMATION MATRIX
To compute the trace of the Fisher Information Matrix, we use the following expression derived directly from the definition:
$$\mathrm{tr}(F) = \mathbb{E}_{x\sim\hat{Q}(x)}\,\mathbb{E}_{y\sim p_w(y|x)}\big[\mathrm{tr}\big(\nabla_w \log p_w(y|x)\,\nabla_w \log p_w(y|x)^T\big)\big] = \mathbb{E}_{x\sim\hat{Q}(x)}\,\mathbb{E}_{y\sim p_w(y|x)}\big[\|\nabla_w \log p_w(y|x)\|^2\big],$$
where the input image x is sampled from the dataset, while the label y is sampled from the output posterior. Expectations are approximated by Monte-Carlo sampling. Notice, however, that this expression depends only on the local gradients of the loss with respect to the weights at a point w = w0, so it can be noisy when the loss landscape is highly irregular. This is not a problem for ResNets (Li et al., 2018), but for other architectures we use instead a different technique, proposed in Achille & Soatto (2018). In more detail, let L(w) be the standard cross-entropy loss. Given the current weights w0 of the network, we find the diagonal matrix Σ that minimizes:
$$L' = \mathbb{E}_{w\sim N(w_0,\Sigma)}[L(w)] - \beta \log|\Sigma|,$$
where β is a parameter that controls the smoothness of the approximation. Notice that L′ can be minimized efficiently using the method in Kingma et al. (2015). To see how this relates to the Fisher Information Matrix, assume that L(w) can be approximated locally in w0 as L(w) = L0 + a · w + w · Hw. We can then rewrite L′ as
$$L' = L_0 + \mathrm{tr}(\Sigma H) - \beta \log|\Sigma|.$$
Taking the derivative with respect to $\Sigma$ and setting it to zero (for diagonal $\Sigma$, $\mathrm{tr}(\Sigma H) = \sum_i \Sigma_{ii} H_{ii}$ and $\log|\Sigma| = \sum_i \log \Sigma_{ii}$, so $\partial L'/\partial \Sigma_{ii} = H_{ii} - \beta/\Sigma_{ii}$), we obtain $\Sigma_{ii} = \beta/H_{ii}$. We can then use $\Sigma$ to estimate the trace of the Hessian via $\mathrm{tr}(H) \approx \beta \sum_i 1/\Sigma_{ii}$, and hence the trace of the Fisher Information.
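As an illustration, here is a minimal PyTorch sketch of the direct Monte-Carlo estimator of tr(F) given at the beginning of this subsection; the function name and the per-sample gradient loop (which trades speed for clarity) are ours.

```python
import torch

def fisher_trace(model, loader, n_batches=10, device="cpu"):
    """Estimate tr(F) = E_x E_{y ~ p_w(y|x)} ||grad_w log p_w(y|x)||^2 by
    Monte-Carlo sampling: x from the dataset, y from the model posterior."""
    model.eval()
    total, count = 0.0, 0
    for i, (x, _) in enumerate(loader):
        if i >= n_batches:
            break
        x = x.to(device)
        log_probs = torch.log_softmax(model(x), dim=1)
        y = torch.multinomial(log_probs.exp(), 1).squeeze(1)  # y ~ p_w(y|x)
        ll = log_probs[torch.arange(len(y)), y]  # per-sample log-likelihoods
        for j in range(len(y)):
            model.zero_grad()
            ll[j].backward(retain_graph=j < len(y) - 1)
            total += sum((p.grad ** 2).sum().item()
                         for p in model.parameters() if p.grad is not None)
            count += 1
    return total / max(count, 1)
```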
A.3 CURVE FITTING
Fitting of sensitivity curves and synaptic density profiles from the literature was performed using:
$$f(t) = e^{-(t-d)/\tau_1} - k\,e^{-(t-d)/\tau_2}$$
as the fitting equation, where t is the age at the time of sampling and τ1, τ2, k and d are unconstrained parameters (Banks et al., 1975).
The exponential fit of the sensitivity to the Fisher Information trace uses the expression
$$F(t) = a \exp(c\,S_k(t)) + b,$$
where a, b and c are unconstrained parameters, F (t) is the Fisher Information trace at epoch t of the training of a network without deficits and Sk is the sensitivity computed using a window of size k. That is, Sk(t) is the increase in the final test error over a baseline when the network is trained in the presence of a deficit between epochs t and t+ k.
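Both fits amount to standard non-linear least squares; a minimal SciPy sketch follows, where the data arrays and initial guesses are placeholders of ours, not values from the experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def sensitivity_profile(t, tau1, tau2, k, d):
    # f(t) = exp(-(t - d)/tau1) - k * exp(-(t - d)/tau2)
    return np.exp(-(t - d) / tau1) - k * np.exp(-(t - d) / tau2)

def fisher_from_sensitivity(s, a, b, c):
    # F = a * exp(c * S_k) + b
    return a * np.exp(c * s) + b

# Placeholder measurements: deficit-onset epochs and resulting sensitivities.
t = np.array([0.0, 10.0, 20.0, 30.0, 50.0, 80.0, 110.0])
sens = np.array([0.10, 0.39, 0.39, 0.32, 0.18, 0.07, 0.03])

p_sens, _ = curve_fit(sensitivity_profile, t, sens, p0=(25.0, 12.0, 0.8, 0.0), maxfev=20000)

# Placeholder Fisher Information traces, regressed against the sensitivities.
fim = np.array([2.19, 2.90, 2.90, 2.70, 2.36, 2.13, 2.06])
p_fim, _ = curve_fit(fisher_from_sensitivity, sens, fim, p0=(1.0, 0.5, 1.0))
```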
B ADDITIONAL PLOTS
[Figure: additional plots comparing networks trained with no deficit, blurred up to epoch 100, and always blurred.]
C EXPERIMENTAL DESIGN AND COMPARISON WITH ANIMAL MODELS
Critical periods are task- and deficit-specific. The specific task we address is visual acuity, but the performance is necessarily measured through different mechanisms in animals and Artificial Neural Networks. In animals, visual acuity is traditionally measured by testing the ability to discriminate between black-and-white contrast gratings (with varying spatial frequency) and a uniform gray field. The outcome of such tests generally correlates well with the ability of the animal to use the eye to solve other visual tasks relying on acuity. Convolutional Neural Networks, on the other hand, have a very different sensory processing mechanism (based on heavily quantized data), which may trivialize such a test. Rather, we directly measure the performance of the network on a high-level task, specifically image classification, for which CNNs are optimized.
We chose to simulate cataracts in our DNN experiments, a deficit which allows us to explore its complex interactions with the structure of the data and the architecture of the network. Unfortunately, while the overall trends of cataract-induced critical periods have been studied and understood in animal models, there is not enough data to confidently regress sensitivity curves comparable to those obtained in DNNs. For this reason, in Figure 1 we compare the performance loss in a DNN trained in the presence of a cataract-like deficit with the results obtained from monocularly deprived kittens, which exhibit similar trends and are one of the most common experimental paradigms in the visual neurosciences.
Simulating complete visual deprivation in a neural network is not as simple as feeding a constant stimulus: a network presented with a constant blank input will rapidly become trivial and thus unable to train on new data. This is to be expected, since a blank input is a perfectly predictable stimulus and thus the network can quickly learn the (trivial) solution to the task. We instead wanted to model an uninformative stimulus, akin to noise. Moreover, even when the eyes are sutured or maintained in the darkness, there will be background excitation of photoreceptors that is best modeled as noise. To account for this, we simulate sensory deprivation by replacing the input images with a dataset composed of (uninformative) random Gaussian noise. This way the network is trained on solving the highly non-trivial task of memorizing the association between the finitely-many noise patterns and their corresponding labels.
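A minimal PyTorch sketch of such a deprivation dataset follows; the class and its arguments are ours. Regenerating the same fixed noise pattern per index is the key design point: if the noise were resampled at every access, the labels would carry no learnable signal at all.

```python
import torch
from torch.utils.data import Dataset

class NoiseDeprivationDataset(Dataset):
    """Fixed Gaussian-noise 'images' with arbitrary but fixed labels: the only
    possible solution is to memorize the pattern -> label association."""
    def __init__(self, n=50000, shape=(3, 32, 32), n_classes=10, seed=0):
        self.n, self.shape, self.seed = n, shape, seed
        g = torch.Generator().manual_seed(seed)
        self.labels = torch.randint(0, n_classes, (n,), generator=g)

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        # Deterministically regenerate the same noise pattern for index i.
        g = torch.Generator().manual_seed(self.seed + 1 + i)
        return torch.randn(self.shape, generator=g), self.labels[i]
```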
1. What are the strengths of the paper regarding its artificial neural network experiments?
2. What are the interesting phenomena shown in the experiments that could inspire other researchers?
3. How does the reviewer assess the correlation between the effects of different kinds of deficits and the two phases observed in the variations of the trace of the Fisher information matrix?
4. How does the reviewer evaluate the paper's writing quality, comments, and experimental design?
5. Are there any limitations or concerns regarding the comparison between real brains and back-prop trained multilayer neural networks?
Review
Let's be frank: I have never been a fan of comparing real brains with back-prop trained multilayer neural networks that have little to do with real neurons. For instance, I am unmoved when Figure 1 compares multilayer network simulations with experimental data on actual kittens. More precisely, I see such comparisons as cheap shots.
However, after forgetting about the kitten, I can see lots of good things in this paper. The artificial neural network experiments designed by the authors show interesting phenomena in a manner that is amenable to replication. The experiments about the varied effects of different kinds of deficits are particularly interesting and could inspire other researchers in creating mathematical models for these striking differences. The authors also correlate these effects with the two phases they observe in the variations of the trace of the Fisher information matrix. This is reminiscent of Tishby's bottleneck view on neural networks, but different in interesting ways. To start with, the trace of the Fisher information matrix is much easier to estimate than Tishby's mutual information between patterns, labels, and layer activation. It also might represent something of a different nature, in ways that I do not understand at this point.
In addition, the paper is very well written, the comments are well thought out, and the experiments seem easy to replicate.
Given all these qualities, I'll gladly take the kitten as well.
ICLR | Title
Critical Learning Periods in Deep Networks
Abstract
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of “Information Plasticity”. Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
1 INTRODUCTION
Critical periods are time windows of early post-natal development during which sensory deficits can lead to permanent skill impairment (Kandel et al., 2013). Researchers have documented critical periods affecting a range of species and systems, from visual acuity in kittens (Wiesel & Hubel, 1963b; Wiesel, 1982) to song learning in birds (Konishi, 1985). Uncorrected eye defects (e.g., strabismus, cataracts) during the critical period for visual development lead to amblyopia in one in fifty adults.
The cause of critical periods is ascribed to the biochemical modulation of windows of neuronal plasticity (Hensch, 2004). In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models. This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena.
We propose using the information in the weights, measured by an efficient approximation of the Fisher Information, to study critical period phenomena in DNNs. We show that, counterintuitively, the information in the weights does not increase monotonically during training. Instead, a rapid growth in information (“memorization phase”) is followed by a reduction of information (“reorganization” or “forgetting” phase), even as classification performance keeps increasing. This behavior is consistent across different tasks and network architectures. Critical periods are centered in the memorization phase.
∗These authors contributed equally to this work.
Figure 1: DNNs exhibit critical periods. (A) Final accuracy achieved by a CNN trained with a cataract-like deficit as a function of the training epoch N at which the deficit is removed (solid line). Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed. As in animal models, critical periods coincide with the early learning phase during which, in the absence of deficits, test accuracy would rapidly increase (dashed). (B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and normal visual acuity development (in kittens) as a function of their age (dashed) (Giffin & Mitchell, 1978; Mitchell, 1988). Sensitivity during learning: (C) Final test accuracy of a DNN as a function of the onset of a short 40-epoch deficit. The decrease in the final performance can be used to measure the sensitivity to deficits. The most sensitive epochs correspond to the early rapid learning phase, before the test error (dashed line) begins to plateau. Afterwards, the network is largely unaffected by the temporary deficit. (D) This can be compared with changes in the degree of functional disconnection (normalized numbers of V1 monocular cells disconnected from the contralateral eye) as a function of the kittens’ age at the onset of a 10-12-day deficit window (Olson & Freeman, 1980). Dashed lines are as in A and B respectively, up to a re-scaling of the y-axis.
Our findings, described in Section 2, indicate that the early transient is critical in determining the final solution of the optimization associated with training an artificial neural network. In particular, the effects of sensory deficits during a critical period cannot be overcome, no matter how much additional training is performed. Yet most theoretical studies have focused on the network behavior after convergence (Representation Learning) or on the asymptotic properties of the optimization scheme used for training (SGD).
To study this early phase, in Section 3, we use the Fisher Information to quantify the effective connectivity of a network during training, and introduce the notion of Information Plasticity in learning. Information Plasticity is maximal during the memorization phase, and decreases in the reorganization phase. We show that deficit sensitivity during critical periods correlates strongly with the effective connectivity.
In Section 4 we discuss our contribution in relation to previous work. When considered in conjunction with recent results on representation learning (Achille & Soatto, 2018), our findings indicate that forgetting (reducing information in the weights) is critical to achieving invariance to nuisance variability as well as independence of the components of the representation, but comes at the price of reduced adaptability later in the training. We also hypothesize that the loss of physical connectivity in biology (neural plasticity) could be a consequence, rather than a cause, of the loss of Information Plasticity, which depends on how the information is distributed throughout a network during the early stages of learning. These results also shed light on the common practice of pre-training a model on a task and then fine-tuning it for another, one of the most rudimentary forms of transfer learning. Our experiments show that, rather than being helpful, pre-training can be detrimental, even if the tasks are similar (e.g., same labels, slightly blurred images).
2 EXPERIMENTS
A notable example of a critical period-inducing deficit, commonly affecting humans, is amblyopia (reduced visual acuity in one eye) caused by cataracts during infancy or childhood (Taylor et al., 1979;
von Noorden, 1981). Even after surgical correction of cataracts, the ability of the patients to regain normal acuity in the affected eye depends both on the duration of the deficit and on its age of onset, with earlier and longer deficits causing more severe effects. In this section, we aim to study the effects of similar deficits in DNNs. To do so, we train a standard All-CNN architecture based on Springenberg et al. (2014) (see Appendix A) to classify objects in small 32 × 32 images from the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). We train with SGD using an exponential annealing schedule for the learning rate. To simulate the effect of cataracts, for the first t0 epochs the images in the dataset are downsampled to 8 × 8 and then upsampled back to 32 × 32 using bilinear interpolation, in practice blurring the image and destroying small-scale details.1 After that, the training continues for 160 more epochs, giving the network time to converge and ensuring it is exposed to the same number of uncorrupted images as in the control (t0 = 0) experiment.
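For concreteness, the cataract-like deficit can be implemented as a simple batch transform; a minimal PyTorch sketch (the function name is ours):

```python
import torch
import torch.nn.functional as F

def cataract_blur(images: torch.Tensor) -> torch.Tensor:
    """Simulate the cataract-like deficit: downsample 32x32 images to 8x8,
    then upsample back to 32x32 with bilinear interpolation, destroying
    small-scale details. `images` has shape (N, C, 32, 32)."""
    low = F.interpolate(images, size=(8, 8), mode="bilinear", align_corners=False)
    return F.interpolate(low, size=(32, 32), mode="bilinear", align_corners=False)

# During the first t0 epochs, apply the deficit to every training batch:
# inputs = cataract_blur(inputs) if epoch < t0 else inputs
```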
DNNs exhibit critical periods: In Figure 1, we plot the final performance of a network affected by the deficit as a function of the epoch t0 at which the deficit is corrected. We can readily observe the existence of a critical period: If the blur is not removed within the first 40-60 epochs, the final performance is severely decreased when compared to the baseline (up to a threefold increase in error). The decrease in performance follows trends commonly observed in animals, and may be qualitatively compared, for example, to the loss of visual acuity observed in kittens monocularly deprived from birth as a function of the length of the deficit (Mitchell, 1988).2
We can measure more accurately the sensitivity to a blur deficit during learning by introducing the deficit in a short window of constant length (40 epochs), starting at different epochs, and then measure the decrease in the DNN’s final performance compared to the baseline (Figure 1). Doing this, we observe that the sensitivity to the deficit peaks in the central part of the early rapid learning phase (at around 30 epochs), while introducing the deficit later produces little or no effect. A similar experiment performed on kittens, using a window of 10-12 days during which the animals are monocularly deprived, again shows a remarkable similarity between the profiles of the sensitivity curves (Olson & Freeman, 1980).
High-level deficits are not associated with a critical period: A natural question is whether any change in the input data distribution will have a corresponding critical period for learning. This is not the case for neuronal networks, which remain plastic enough to adapt to high-level changes in sensory processing (Daw, 2014). For example, it is well-reported that even adult humans can rapidly adapt to certain drastic changes, such as the inversion of the visual field (Stratton, 1896; Kohler, 1964). In Figure 2, we observe that DNNs are also largely unaffected by high-level deficits – such as vertical flipping of the image, or random permutation of the output labels: After deficit correction, the network quickly recovers its baseline performance. This hints at a finer interplay between the structure of the data distribution and the optimization algorithm, resulting in the existence of a critical period.
1We employed this method, instead of a simpler Gaussian blur, since it has a very similar effect and makes the quantification of information loss clearer.
2See Appendix C for details on how to compare different models and deficits.
Sensory deprivation: We now apply a more drastic deficit to the network, replacing each image with white noise. Figure 2 shows how this extreme deficit exhibits a remarkably less severe effect than the one obtained by only blurring images: Training the network with white noise does not provide any information on the natural images, and results in milder effects than those caused by a deficit (e.g., image blur) which instead conveys some information, but leads the network to (incorrectly) learn that no fine structure is present in the images. A similar effect has been observed in animals, where a period of early sensory deprivation (dark-rearing) can lengthen the critical period and thus cause less severe effects than those documented in light-reared animals (Mower, 1991). We refer the reader to Appendix C for a more detailed comparison between sensory deprivation and training on white noise.
Architecture, depth, and learning rate annealing: Figure 3 shows that a fully-connected network trained on the MNIST digit classification dataset also exhibits a critical period for the image blur deficit. Therefore, the convolutional structure is not necessary, nor is the use of natural images. Similarly, a ResNet-18 trained on CIFAR-10 also has a critical period, which is remarkably sharper than the one found in a standard convolutional network (Figure 1). This is especially interesting, since ResNets allow for easier backpropagation of gradients to the lower layers, thus suggesting that the critical period is not caused by vanishing gradients. However, Figure 2 (Right) shows that the presence of a critical period does indeed depend critically on the depth of the network. In Figure 3, we confirm that a critical period exists even when the network is trained with a constant learning rate, and therefore cannot be explained by an annealed learning rate in later epochs.
Optimization method and weight decay: Figure 3 (Bottom Right) shows that when using Adam as the optimization scheme, which renormalizes the gradients using a running mean of their first two moments, we still observe a critical period similar to that of standard SGD. However, changing the
hyperparameters of the optimization can change the shape of the critical period: In Figure 3 (Bottom Left) we show that increasing weight decay makes critical periods longer and less sharp. This can be explained as it both slows the convergence of the network, and it limits the ability of higher layers to change to overcome the deficit, thus encouraging lower layers to also learn new features.
3 FISHER INFORMATION ANALYSIS
We have established empirically that, in animals and DNNs alike, the initial phases of training are critical to the outcome of the training process. In animals, this strongly relates to changes in the brain architecture of the areas associated with the deficit (Daw, 2014). This is inevitably different in artificial networks, since their connectivity is formally fixed at all times during training. However, not all the connections are equally useful to the network: Consider a network encoding the approximate posterior distribution pw(y|x), parameterized by the weights w, of the task variable y given an input image x. The dependency of the final output on a specific connection can be estimated by perturbing the corresponding weight and looking at the magnitude of the change in the final distribution. Specifically, given a perturbation w′ = w + δw of the weights, the discrepancy between pw(y|x) and the perturbed network output pw′(y|x) can be measured by their Kullback-Leibler divergence, which, to second-order approximation, is given by:
$$\mathbb{E}_x\,\mathrm{KL}(\,p_{w'}(y|x)\,\|\,p_w(y|x)\,) = \delta w \cdot F \delta w + o(\delta w^2),$$
where the expectation over x is computed using the empirical data distribution Q̂(x) given by the dataset, and $F := \mathbb{E}_{x\sim\hat{Q}(x)}\,\mathbb{E}_{y\sim p_w(y|x)}[\nabla_w \log p_w(y|x)\,\nabla_w \log p_w(y|x)^T]$ is the Fisher Information Matrix (FIM). The FIM can thus be considered a local metric measuring how much the perturbation of a single weight (or a combination of weights) affects the output of the network (Amari & Nagaoka, 2000). In particular, weights with low Fisher Information can be changed or “pruned” with little effect on the network’s performance. This suggests that the Fisher Information can be used as a measure of the effective connectivity of a DNN, or, more generally, of the “synaptic strength” of a connection (Kirkpatrick et al., 2017). Finally, the FIM is also a semidefinite approximation of the Hessian of the loss function (Martens, 2014) and hence of the curvature of the loss landscape at a particular point w during training, providing an elegant connection between the FIM and the optimization procedure (Amari & Nagaoka, 2000), which we will also employ later.
Unfortunately, the full FIM is too large to compute. Rather, we use its trace to measure the global or layer-wise connection strength, which we can compute efficiently using (Appendix A):
$$\mathrm{tr}(F) = \mathbb{E}_{x\sim\hat{Q}(x)}\,\mathbb{E}_{y\sim p_w(y|x)}\big[\|\nabla_w \log p_w(y|x)\|^2\big].$$
In order to capture the behavior of the off-diagonal terms, we also tried computing the log-determinant of the full matrix using the Kronecker-Factorized approximation of Martens & Grosse (2015), but we observed the same qualitative trend as the trace. Since the FIM is a local measure, it is very sensitive to the irregularities of the loss landscape. Therefore, in this section we mainly use ResNets, which have a relatively smooth landscape (Li et al., 2018). For other architectures we use instead a more robust estimator of the FIM based on the injection of noise in the weights (Achille & Soatto, 2018), also described in Appendix A.
Two phases of learning: As its name suggests, the FIM can be thought of as a measure of the quantity of information about the training data that is contained in the model (Fisher, 1925). Based on this, one would expect the overall strength of the connections to increase monotonically as we acquire information from experience. However, this is not the case: While during an initial phase the network acquires information about the data, which results in a large increase in the strength of the connections, once the performance in the task begins to plateau, the network starts decreasing the overall strength of its connections. However, this does not correspond to a reduction in performance; rather, performance keeps slowly improving. This can be seen as a “forgetting”, or “compression”, phase, during which redundant connections are eliminated and non-relevant variability in the data is discarded. It is well established that the elimination (“pruning”) of unnecessary synapses is a fundamental process during learning and brain development (Rakic et al., 1986) (Figure 4, Center); in Figure 4 (Left) an analogous phenomenon is clearly and quantitatively shown for DNNs.
Strikingly, these changes in the connection strength are closely related to the sensitivity to critical-period-inducing deficits such as image blur, computed using the “sliding window” method as in Figure 1. In Figure 4 we see that the sensitivity closely follows the trend of the FIM. This is remarkable since the FIM is a local quantity computed at a single point during the training of a network in the absence of deficit, while sensitivity during a critical period is computed, using test data, at the end of the impaired network training. Figure 4 (Right) further emphasizes the effect of deficits on the FIM: in the presence of a deficit, the FIM grows and remains substantially higher even after the deficit is removed. This may be attributed to the fact that, when the data are so corrupted that classification is impossible, the network is forced to memorize the labels, therefore increasing the quantity of information needed to perform the same task.
Layer-wise effects of deficits: A layer-wise analysis of the FIM sheds further light on how the deficit affects the network. When the network (in this case All-CNN, which has a clearer division among layers than ResNet) is trained without deficits, the most important connections are in the intermediate layers (Figure 5, Left), which can process the input CIFAR-10 image at the most informative intermediate scale. However, if the network is initially trained on blurred data (Figure 5, top right), the strength of the connections is dominated by the top layer (Layer 6). This is to be expected, since the low-level and mid-level structures of the images are destroyed, making the lower layers ineffective. However, if the deficit is removed early in the training (Figure 5, top center), the network manages to “reorganize”, reducing the information contained in the last layer, and, at the same time, increasing the information in the intermediate layers. We refer to these phenomena as changes in “Information Plasticity”. If, however, the data change occurs after the consolidation phase, the network is unable to change its effective connectivity: The connection strength of each layer remains substantially constant. The network has lost its Information Plasticity and is past its critical period.
Critical periods as bottleneck crossings: The analysis of the FIM also sheds light on the geometry of the loss function and the learning dynamics. Since the FIM can be interpreted as the local curvature of the residual landscape, Fig. 4 shows that learning entails crossing bottlenecks: In the initial phase the network enters regions of high curvature (high Fisher Information), and once consolidation begins, the curvature decreases, allowing it to cross the bottleneck and enter the valley below. If the statistics change after crossing the bottleneck, the network is trapped. In this interpretation, the early phases of convergence are critical in leading the network towards the “right” final valley. The end of critical periods comes after the network has crossed all bottlenecks (and thus learned the features) and entered a wide valley (region of the weight space with low curvature, or low Fisher Information).
4 DISCUSSION AND RELATED WORK
Critical periods have thus far been considered an exclusively biological phenomenon. At the same time, the analysis of DNNs has focused on asymptotic properties and neglected the initial transient behavior. To the best of our knowledge, we are the first to show that artificial neural networks exhibit critical period phenomena, and to highlight the critical role of the transient in determining the asymptotic performance of the network. Inspired by the role of synaptic connectivity in modulating critical periods, we introduce the use of Fisher Information to study this initial phase. We show that the initial sensitivity to deficits closely follows changes in the FIM, both global, as the network first rapidly increases and then decreases the amount of stored information, and layer-wise, as the network “reorganizes” its effective connectivity in order to optimally process information.
Our work naturally relates to the extensive literature on critical periods in biology. Despite artificial networks being an extremely reductionist approximation of neuronal networks, they exhibit behaviors that are qualitatively similar to the critical periods observed in human and animal models. Our information analysis shows that the initial rapid memorization phase is followed by a loss of Information Plasticity which, counterintuitively, further improves the performance. On the other hand, when combined with the analysis of Achille & Soatto (2018) this suggests that a “forgetting” phase may be desirable, or even necessary, in order to learn robust, nuisance-invariant representations.
The existence of two distinct phases of training has been observed and discussed by Shwartz-Ziv & Tishby (2017), although their analysis builds on the (Shannon) information of the activations, rather than the (Fisher) information in the weights. On a multi-layer perceptron (MLP), Shwartz-Ziv & Tishby (2017) empirically link the two phases to a sudden increase in the gradients’ covariance. It may be tempting to compare these results with our Fisher Information analysis. However, it must be noted that the FIM is computed using the gradients with respect to the model prediction, not to the ground truth label, leading to important qualitative differences. In Figure 6, we show that the covariance and norm of the gradients exhibit no clear trends during training with and without deficits, and, therefore, unlike the FIM, do not correlate with the sensitivity to critical periods. However,
a connection between our FIM analysis and the information in the activations can be established based on the work of Achille & Soatto (2018), which shows that the FIM of the weights can be used to bound the information in the activations. In fact, we may intuitively expect that pruning of connections naturally leads to loss of information in the corresponding activations. Thus, our analysis corroborates and expands on some of the claims of Shwartz-Ziv & Tishby (2017), while using an independent framework.
Aside from being more closely related to the deficit sensitivity during critical periods, Fisher’s Information also has a number of technical advantages: Its diagonal is simple to estimate, even on modern state-of-the-art architectures and compelling datasets, and it is less sensitive to the choice of estimator for mutual information, avoiding some of the common criticisms of the use of information quantities in the analysis of deep learning models. Finally, the FIM allows us to probe fine changes in the effective connectivity across the layers of the network (Figure 5), which are not visible in Shwartz-Ziv & Tishby (2017).
A complete analysis of the activations should account not only for the amount of information (both task- and nuisance-related), but also for its accessibility, e.g., how easily task-related information can be extracted by a linear classifier. Following a similar idea, Montavon et al. (2011) aim to study the layer-wise, or “spatial” (but not temporal) evolution of the simplicity of the representation by performing a principal component analysis (PCA) of a radial basis function (RBF) kernel embedding of each layer representation. They show that, on a multi-layer perceptron, task-relevant information increasingly concentrates on the first principal components of the representation’s embedding, implying that it becomes more easily “accessible” layer after layer, while nuisance information (when it is codified at all) is encoded in the remaining components. In our work we instead focus on the temporal evolution of the weights. However, it is important to note that a network with simpler weights (as measured by the FIM) also requires a simpler smooth representation (as measured, e.g., by the RBF embedding) in order to operate properly, since it needs to be resistant to perturbations of the weights. Thus our analysis is wholly compatible with the intuitions of Montavon et al. (2011). It would also be interesting to study the joint spatio-temporal evolution of the network using both frameworks at once.
One advantage of focusing on the information of the weights rather than on the activations, or behavior of the network, is to have a readout of the “effective connectivity” during critical periods, which can be compared to similar readouts in animals. In fact, “behavioral” readouts upon deficit removal, both in artificial and neuronal networks, can potentially be confounded by deficit-coping changes at different levels of the visual pathways (Daw, 2014; Knudsen, 2004). On the other hand, deficits in deprived animals are mirrored by abnormalities in the circuitry of the visual pathways, which we characterize in DNNs using the FIM to study its “effective connectivity”, i.e., the connections that are actually employed by the network to solve the task. Sensitivity to critical periods and the trace of the Fisher Information peak at the same epochs, in accord with the evidence that skill development and critical periods in neuronal networks are modulated by changes (generally experience-dependent) in synaptic plasticity (Knudsen, 2004; Hensch, 2004). Our layer-wise analysis of the Fisher Information (Figure 5) also shows that visual deficits reinforce higher layers to the detriment of intermediate layers, leaving low-level layers virtually untouched. If the deficit is removed after the critical period ends, the network is not able to reverse these effects. Although the two systems are radically different, a similar response can be found in the visual pathways of animal models: Lower levels (e.g., retina, lateral geniculate nucleus) and higher-level visual areas (e.g., V2 and post-V2) show little remodeling upon deprivation, while most changes happen in different layers of V1 (Wiesel & Hubel, 1963a; Hendrickson et al., 1987).
An insightful interpretation of critical periods in animal models was proposed by Knudsen (2004): The initial connections of neuronal networks are unstable and easily modified (highly plastic), but as more “samples” are observed, they change and reach a more stable configuration which is difficult to modify. Learning can, however, still happen within the newly created connectivity pattern. This is largely compatible with our findings: Sensitivity to critical-period-inducing deficits peaks when connections are remodeled (Figure 4, Left), and different connectivity profiles are observed in networks trained with and without a deficit (Figure 5). Moreover, high-level deficits such as image flipping and label permutation, which do not require restructuring of the network’s connections in order to be corrected, do not exhibit a critical period.
Applying a deficit at the beginning of the training may be compared to the common practice of pre-training, which is generally found to improve the performance of the network. Erhan et al. (2010) study the somewhat related, but now seldom used, practice of layer-wise unsupervised pre-training, and suggest that it may act as a regularizer by moving the weights of the network towards an area of the loss landscape closer to the attractors for good solutions, and that early examples have a stronger effect in steering the network towards particular solutions. Here, we have shown that pre-training on blurred data can have the opposite effect; i.e., it can severely decrease the final performance of the network. However, in our case, interpreting the deficit’s effect as moving the network close to a bad attractor is difficult to reconcile with the smooth transition observed in the critical periods, since the network would either converge to this attractor, and thus have low accuracy, or escape it completely.
Instead, we reconcile our experiments with the geometry of the loss function by introducing a different explanation based on the interpretation of the FIM as an approximation of the local curvature. Figure 4 suggests that SGD encounters two different phases during the network training: At first, the network moves towards high-curvature regions of the loss landscape, while in the second phase the curvature decreases and the network eventually converges to a flat minimum (as observed in Keskar et al. (2017)). We can interpret these as the network crossing narrow bottlenecks during its training in order to learn useful features, before eventually entering a flat region of the loss surface once learning is completed and ending up trapped there. When combining this assumption with our deficit sensitivity analysis, we can hypothesize that the critical period occurs precisely upon crossing of this bottleneck. It is also worth noting that there is evidence that convergence to flat minima (minima with low curvature) in a DNN correlates with good generalization performance (Hochreiter & Schmidhuber, 1997; Li et al., 2018; Chaudhari et al., 2017; Keskar et al., 2017). Indeed, using this interpretation, Figure 4 (Right) tells us that networks more affected by the deficit converge to sharper minima. However, we have also found that the performance of the network is already mostly determined during the early “sensitive” phase. The final sharpness at convergence may therefore be an epiphenomenon, rather than the cause of good generalization.
5 CONCLUSION
Our goal in this paper is not so much to investigate the human (or animal) brain through artificial networks, as to understand fundamental information processing phenomena, in both their biological and artificial implementations. It is also not our goal to suggest that, since they both exhibit critical periods, DNNs are necessarily a valid model of neurobiological information processing, although recent work has emphasized this aspect. We engage in an “Artificial Neuroscience” exercise in part to address a technological need to develop “explainable” artificial intelligence systems whose behavior can be understood and predicted. While traditionally well-understood mathematical models were used by neuroscientists to study biological phenomena, information processing in modern artificial networks is often just as poorly understood as in biology, so we chose to exploit well-known biological phenomena as probes to study information processing in artificial networks.
Conversely, it would also be interesting to explore ways to test whether biological networks prune connections as a consequence of a loss of Information Plasticity, rather than as a cause. The mechanisms underlying network reconfiguration during learning and development might be an evolutionary outcome obtained under the pressure of fundamental information processing phenomena.
ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their thoughtful feedback, and for suggesting new experiments and relevant literature. Supported by ONR N00014-17-1-2072, ARO W911NF-17-1-0304, AFOSR FA9550-15-1-0229 and FA8650-11-1-7156.
A DETAILS OF THE EXPERIMENTS
A.1 ARCHITECTURES AND TRAINING
In all of the experiments, unless otherwise stated, we use the following All-CNN architecture, adapted from Springenberg et al. (2014):
conv 96 - conv 96 - conv 192 s2 - conv 192 - conv 192 - conv 192 s2 - conv 192 - conv1 192 - conv1 10 - avg. pooling - softmax
where each conv block consists of a 3× 3 convolution, batch normalization and ReLU activations. conv1 denotes a 1 × 1 convolution. The network is trained with SGD, with a batch size of 128, learning rate starting from 0.05 and decaying smoothly by a factor of .97 at each epoch. We also use weight decay with coefficient 0.001. In the experiments with a fixed learning rate, we fix the learning rate to 0.001, which we find to allow convergence without excessive overfitting. For the ResNet experiments, we use the ResNet-18 architecture from He et al. (2016) with initial learning rate 0.1, learning rate decay .97 per epoch, and weight decay 0.0005. When training with Adam, we use a learning rate of 0.001 and weight decay 0.0001.
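For concreteness, a minimal PyTorch sketch of this All-CNN model and its training configuration is given below. It is an illustrative reading of the description above, not the authors' code; helper names such as `conv_block` and `AllCNN` are ours, and whether batch norm and ReLU also apply to the final 1×1 block is left ambiguous by the text, so we omit them there.

```python
import torch
import torch.nn as nn


def conv_block(c_in, c_out, stride=1, kernel=3):
    # A conv block as described above: convolution, batch norm, ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel, stride=stride, padding=kernel // 2),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )


class AllCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 96), conv_block(96, 96), conv_block(96, 192, stride=2),
            conv_block(192, 192), conv_block(192, 192), conv_block(192, 192, stride=2),
            conv_block(192, 192),
            conv_block(192, 192, kernel=1),   # conv1 192
            nn.Conv2d(192, num_classes, 1),   # conv1 10 (BN/ReLU here is an open choice)
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )

    def forward(self, x):
        return self.features(x).flatten(1)    # logits; softmax is folded into the loss


model = AllCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=0.001)
# Smooth exponential decay by a factor of 0.97 per epoch, as stated above.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.97)
```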
When experimenting with varying network depths, we use the following architecture:
conv 96 - [conv 96·2^(i−1) - conv 96·2^i s2] for i = 1, …, n - conv 96·2^n - conv1 96·2^n - conv1 10
In order to avoid interference between the annealing scheme and the architecture, in these experiments we fix the learning rate to 0.001.
The Fully Connected network used for the MNIST experiments has hidden layers of size [2500, 2000, 1500, 1000, 500]. All hidden layers use batch normalization followed by ReLU activations. We fix the learning rate to 0.005. Weight decay is not used. We use data augmentation with random translations up to 4 pixels and random horizontal flipping. For MNIST, we pad the images with zeros to bring them to size 32× 32.
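As a concrete reading of this augmentation pipeline and the fully connected architecture, one might write the following with torchvision; implementing the 4-pixel translations as a pad-and-crop is our assumption, since the exact transform is not specified:

```python
import torch.nn as nn
from torchvision import transforms

# Random translations of up to 4 pixels, plus random horizontal flips.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# MNIST images are zero-padded from 28x28 to 32x32.
mnist_transform = transforms.Compose([transforms.Pad(2), transforms.ToTensor()])

# Fully connected network with hidden sizes [2500, 2000, 1500, 1000, 500],
# each hidden layer followed by batch norm and ReLU, as described above.
sizes = [32 * 32, 2500, 2000, 1500, 1000, 500]
layers = [nn.Flatten()]
for c_in, c_out in zip(sizes[:-1], sizes[1:]):
    layers += [nn.Linear(c_in, c_out), nn.BatchNorm1d(c_out), nn.ReLU()]
fc_net = nn.Sequential(*layers, nn.Linear(500, 10))
```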
A.2 APPROXIMATIONS OF THE FISHER INFORMATION MATRIX
To compute the trace of the Fisher Information Matrix, we use the following expression derived directly from the definition:
tr(F ) = Ex∼Q̂(x)Ey∼pw(y|x)[tr(∇w log pw(y|x)∇w log pw(y|x)T )] = Ex∼Q̂(x)Ey∼pw(y|x)[‖∇w log pw(y|x)‖2],
where the input image x is sampled from the dataset, while the label y is sampled from the output posterior. Expectations are approximated by Monte-Carlo sampling. Notice, however, that this expression depends only on the local gradients of the loss with respect to the weights at a point w = w0, so it can be noisy when the loss landscape is highly irregular. This is not a problem for ResNets (Li et al., 2018), but for other architectures we use instead a different technique, proposed in Achille & Soatto (2018). In more detail, let L(w) be the standard cross-entropy loss. Given the current weights w0 of the network, we find the diagonal matrix Σ that minimizes:
L′ = Ew∼N(w0,Σ)[L(w)]− β log |Σ|,
where β is a parameter that controls the smoothness of the approximation. Notice that L′ can be minimized efficiently using the method in Kingma et al. (2015). To see how this relates to the Fisher Information Matrix, assume that L(w) can be approximated locally in w0 as L(w) = L0 + a · w + w ·Hw. We can then rewrite L′ as
L′ = L0 + tr(ΣH)− β log |Σ|.
Taking the derivative with respect to Σ, and setting it to zero, we obtain Σii = β/Hii. We can then use Σ to estimate the trace of the Hessian, and hence of the Fisher information.
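As an illustration, a minimal sketch of the direct Monte-Carlo estimator of tr(F) described above might look as follows; the function name and the per-sample loop are ours, and the number of batches is an arbitrary choice:

```python
import torch
import torch.nn.functional as F


def fisher_trace(model, data_loader, n_batches=10, device="cpu"):
    """Monte-Carlo estimate of tr(F): the expected squared norm of the gradient
    of log p_w(y|x), with x from the dataset and y sampled from the model posterior."""
    model.eval()
    total, count = 0.0, 0
    for i, (x, _) in enumerate(data_loader):  # ground-truth labels are not used
        if i >= n_batches:
            break
        for xi in x.to(device):
            model.zero_grad()
            log_probs = F.log_softmax(model(xi.unsqueeze(0)), dim=1)
            y = torch.multinomial(log_probs.exp(), 1).item()  # y ~ p_w(y|x)
            log_probs[0, y].backward()
            total += sum((p.grad ** 2).sum().item()
                         for p in model.parameters() if p.grad is not None)
            count += 1
    return total / count
```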
A.3 CURVE FITTING
Fitting of sensitivity curves and synaptic density profiles from the literature was performed using:
f(t) = e^(−(t−d)/τ1) − k e^(−(t−d)/τ2)
as the fitting equation, where t is the age at the time of sampling and τ1, τ2, k and d are unconstrained parameters (Banks et al., 1975).
The exponential fit of the sensitivity to the Fisher Information trace uses the expression
F(t) = a exp(c S_k(t)) + b,
where a, b and c are unconstrained parameters, F (t) is the Fisher Information trace at epoch t of the training of a network without deficits and Sk is the sensitivity computed using a window of size k. That is, Sk(t) is the increase in the final test error over a baseline when the network is trained in the presence of a deficit between epochs t and t+ k.
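Both expressions are ordinary nonlinear least-squares fits; a sketch using scipy is shown below. The synthetic data and initial guesses here are placeholders of our own, meant only to show the fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def sensitivity_profile(t, tau1, tau2, k, d):
    # f(t) = exp(-(t - d)/tau1) - k * exp(-(t - d)/tau2)
    return np.exp(-(t - d) / tau1) - k * np.exp(-(t - d) / tau2)

def fisher_of_sensitivity(S, a, b, c):
    # F(t) = a * exp(c * S_k(t)) + b
    return a * np.exp(c * S) + b

rng = np.random.default_rng(0)

# Fit the double-exponential profile to (synthetic) sensitivity measurements.
t = np.linspace(5, 150, 30)
s = sensitivity_profile(t, 60.0, 15.0, 1.2, 0.0) + 0.01 * rng.standard_normal(t.shape)
(tau1, tau2, k, d), _ = curve_fit(sensitivity_profile, t, s, p0=(50.0, 10.0, 1.0, 0.0))

# Fit the exponential relation between sensitivity and the Fisher trace.
S = np.linspace(0.0, 0.4, 20)
f = fisher_of_sensitivity(S, 0.5, 1.0, 5.0) + 0.02 * rng.standard_normal(S.shape)
(a, b, c), _ = curve_fit(fisher_of_sensitivity, S, f, p0=(1.0, 1.0, 3.0))
```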
B ADDITIONAL PLOTS
[Additional plots comparing three training conditions: no deficit, blurred up to epoch 100, always blurred.]
C EXPERIMENTAL DESIGN AND COMPARISON WITH ANIMAL MODELS
Critical periods are task- and deficit-specific. The specific task we address is visual acuity, but the performance is necessarily measured through different mechanisms in animals and Artificial Neural Networks. In animals, visual acuity is traditionally measured by testing the ability to discriminate between black-and-white contrast gratings (with varying spatial frequency) and a uniform gray field. The outcome of such tests generally correlates well with the ability of the animal to use the eye to solve other visual tasks relying on acuity. Convolutional Neural Networks, on the other hand, have a very different sensory processing mechanism (based on heavily quantized data), which may trivialize such a test. Rather, we directly measure the performance of the network on a high-level task, specifically image classification, for which CNNs are optimized.
We chose to simulate cataracts in our DNN experiments, a deficit which allows us to explore its complex interactions with the structure of the data and the architecture of the network. Unfortunately, while the overall trends of cataract-induced critical periods have been studied and understood in animal models, there is not enough data to confidently regress sensitivity curves comparable to those obtained in DNNs. For this reason, in Figure 1 we compare the performance loss in a DNN trained in the presence of a cataract-like deficit with the results obtained from monocularly deprived kittens, which exhibit similar trends and are one of the most common experimental paradigms in the visual neurosciences.
Simulating complete visual deprivation in a neural network is not as simple as feeding a constant stimulus: a network presented with a constant blank input will rapidly become trivial and thus unable to train on new data. This is to be expected, since a blank input is a perfectly predictable stimulus and thus the network can quickly learn the (trivial) solution to the task. We instead wanted to model an uninformative stimulus, akin to noise. Moreover, even when the eyes are sutured or maintained in the darkness, there will be background excitation of photoreceptors that is best modeled as noise. To account for this, we simulate sensory deprivation by replacing the input images with a dataset composed of (uninformative) random Gaussian noise. This way the network is trained on solving the highly non-trivial task of memorizing the association between the finitely-many noise patterns and their corresponding labels. | 1. What is the significance of the phenomenon identified in deep neural networks regarding critical periods?
2. How did the authors illustrate the parallelism between critical periods in biological and artificial systems?
3. What insights did the authors obtain from analyzing the Fisher Information Matrix (FIM)?
4. What is the impact of pretraining on the performance of the network, and how can freezing certain layers help "reopen" the critical period?
5. How do changes in the optimizer or regularizer affect the presence of the critical period and the deficit caused by sensory deprivation?
6. How does the concept of forgetting relate to the second learning phase, and what happens when a standard regularizer like weight decay is introduced?
7. Is there compatibility between the results presented in this paper and the view in neuroscience regarding the opening of the critical period window being mechanistically mediated by inhibition? | Review | Review
The authors analyze the learning dynamics in deep neural networks and identify an intriguing phenomenon that reflects what in biological learning is known as a critical period: a relatively short time window early in post-natal development during which organisms become particularly sensitive to certain changes in experience. The importance of critical periods in biology is due to the fact that specific types of perturbations to the input statistics can cause deficits in performance which can be permanent, in the sense that later training cannot rescue them.
The authors did a great job illustrating the parallelism between critical periods in biological neural systems and the analogous phenomenon in artificial deep neural networks. Essentially, they showed that blurring the input samples of the cifar10 dataset during the initial phase of training had an effect that is very reminiscent of the result of sensory deprivation during the critical periods of visual learning in mammals, resulting in a long-term impairment in visual object recognition that persists even if blurring is removed later in training. The authors go as far as characterizing the effects of the length of the "sensory deprivation" window and its onset during training, and comparing the results to classic neuroscience monocular deprivation experiments in kittens, pointing out very striking phenomenological similarities.
Next, the authors establish a connection between critical periods in deep neural networks and the amount of information that the weights of the trained model contain about the task by looking at the Fisher Information Matrix (FIM). With this method they obtain a host of interesting insights. One insight is that there are two phases in learning: an initial one where the trace of the FIM grows together with a rapid increase in classification accuracy, and a second one where accuracy keeps slightly increasing, but the Fisher Information trace globally decreases. They then go into detail and look at how this quantity evolves within individual layers of the deep learning architecture, revealing that the deficit caused by the blurring perturbation during the early epochs of training is accompanied by a larger FIM trace in the last layers of the architecture at the expense of the intermediate layers.
Besides the fact that deep neural networks exhibit critical periods, another important result of this work is the demonstration that pretraining, if done inappropriately, can actually be deleterious to the performance of the network.
This paper is insightful and interesting. The conceptual and experimental part of the paper is very clearly presented, and the methodology is very appropriate to tease apart some of the mechanisms underlying the basic phenomenological observations. Here are some detailed questions meant to elucidate some points that are still unclear.
- Presumably, early training on blurred images prevents the initial conv filters from learning to discriminate high-frequency components (first of all, is this true?). The crucial phenomenon pointed out by the authors is that, even after removing the blur, the lower convolutions aren't able to recover and learn the high-frequency components. In fact, the high FIM trace in the latest layers could be due to the fact that they're trying to compensate for the lack of appropriate low-level feature extractors by composing low-frequency filters so as to "build" high-frequency ones. If this makes sense, one would assume that freezing the last layers and only maintaining plasticity in the lower ones could be a way of "reopening" the critical period. Is that indeed the case?
- The authors show that their main results are robust to changes in the learning rate annealing schedule. However, it is not clear how changing the optimizer might affect the presence of the critical period. What would happen, for instance, using Adam or another optimization procedure that relies on the normalization of the gradient?
- On a related note, the authors point out the importance of forgetting, in particular as the main mechanism behind the second learning phase. They also point out that the deficit in learning the task after sensory deprivation is accompanied by a large FIM trace in the last layers. What would happen in the presence of a standard regularizer like weight decay? Assuming that a large FIM trace in the last layers is correlated with large weights, that might mitigate the negative effect of early sensory deprivation.
- In neuroscience the opening of the critical period window is thought to be mechanistically mediated by the maturation of inhibition. Is that view compatible with the results presented in this paper? This is sort of complementary to the FIM analysis, since it is mostly about the net average input to a neuron, i.e., about the information contained in the activations, rather than the weights.
ICLR | Title
Critical Learning Periods in Deep Networks
Abstract
Similar to humans and animals, deep artificial neural networks exhibit critical periods during which a temporary stimulus deficit can impair the development of a skill. The extent of the impairment depends on the onset and length of the deficit window, as in animal models, and on the size of the neural network. Deficits that do not affect low-level statistics, such as vertical flipping of the images, have no lasting effect on performance and can be overcome with further training. To better understand this phenomenon, we use the Fisher Information of the weights to measure the effective connectivity between layers of a network during training. Counterintuitively, information rises rapidly in the early phases of training, and then decreases, preventing redistribution of information resources in a phenomenon we refer to as a loss of “Information Plasticity”. Our analysis suggests that the first few epochs are critical for the creation of strong connections that are optimal relative to the input data distribution. Once such strong connections are created, they do not appear to change during additional training. These findings suggest that the initial learning transient, under-scrutinized compared to asymptotic behavior, plays a key role in determining the outcome of the training process. Our findings, combined with recent theoretical results in the literature, also suggest that forgetting (decrease of information in the weights) is critical to achieving invariance and disentanglement in representation learning. Finally, critical periods are not restricted to biological systems, but can emerge naturally in learning systems, whether biological or artificial, due to fundamental constraints arising from learning dynamics and information processing.
1 INTRODUCTION
Critical periods are time windows of early post-natal development during which sensory deficits can lead to permanent skill impairment (Kandel et al., 2013). Researchers have documented critical periods affecting a range of species and systems, from visual acuity in kittens (Wiesel & Hubel, 1963b; Wiesel, 1982) to song learning in birds (Konishi, 1985). Uncorrected eye defects (e.g., strabismus, cataracts) during the critical period for visual development lead to amblyopia in one in fifty adults.
The cause of critical periods is ascribed to the biochemical modulation of windows of neuronal plasticity (Hensch, 2004). In this paper, however, we show that deep neural networks (DNNs), while completely devoid of such regulations, respond to sensory deficits in ways similar to those observed in humans and animal models. This surprising result suggests that critical periods may arise from information processing, rather than biochemical, phenomena.
We propose using the information in the weights, measured by an efficient approximation of the Fisher Information, to study critical period phenomena in DNNs. We show that, counterintuitively, the information in the weights does not increase monotonically during training. Instead, a rapid growth in information (“memorization phase”) is followed by a reduction of information (“reorganization” or “forgetting” phase), even as classification performance keeps increasing. This behavior is consistent across different tasks and network architectures. Critical periods are centered in the memorization phase.
∗These authors contributed equally to this work.
Figure 1: DNNs exhibit critical periods. (A) Final accuracy achieved by a CNN trained with a cataract-like deficit as a function of the training epoch N at which the deficit is removed (solid line). Performance is permanently impaired if the deficit is not corrected early enough, regardless of how much additional training is performed. As in animal models, critical periods coincide with the early learning phase during which, in the absence of deficits, test accuracy would rapidly increase (dashed). (B) For comparison, we report acuity for kittens monocularly deprived since birth and tested at the time of eye-opening (solid), and normal visual acuity development (in kittens) as a function of their age (dashed) (Giffin & Mitchell, 1978; Mitchell, 1988). Sensitivity during learning: (C) Final test accuracy of a DNN as a function of the onset of a short 40-epoch deficit. The decrease in the final performance can be used to measure the sensitivity to deficits. The most sensitive epochs correspond to the early rapid learning phase, before the test error (dashed line) begins to plateau. Afterwards, the network is largely unaffected by the temporary deficit. (D) This can be compared with changes in the degree of functional disconnection (normalized numbers of V1 monocular cells disconnected from the contralateral eye) as a function of the kittens' age at the onset of a 10-12-day deficit window (Olson & Freeman, 1980). Dashed lines are as in A and B respectively, up to a re-scaling of the y-axis.
Our findings, described in Section 2, indicate that the early transient is critical in determining the final solution of the optimization associated with training an artificial neural network. In particular, the effects of sensory deficits during a critical period cannot be overcome, no matter how much additional training is performed. Yet most theoretical studies have focused on the network behavior after convergence (Representation Learning) or on the asymptotic properties of the optimization scheme used for training (SGD).
To study this early phase, in Section 3, we use the Fisher Information to quantify the effective connectivity of a network during training, and introduce the notion of Information Plasticity in learning. Information Plasticity is maximal during the memorization phase, and decreases in the reorganization phase. We show that deficit sensitivity during critical periods correlates strongly with the effective connectivity.
In Section 4 we discuss our contribution in relation to previous work. When considered in conjunction with recent results on representation learning (Achille & Soatto, 2018), our findings indicate that forgetting (reducing information in the weights) is critical to achieving invariance to nuisance variability as well as independence of the components of the representation, but comes at the price of reduced adaptability later in the training. We also hypothesize that the loss of physical connectivity in biology (neural plasticity) could be a consequence, rather than a cause, of the loss of Information Plasticity, which depends on how the information is distributed throughout a network during the early stages of learning. These results also shed light on the common practice of pre-training a model on a task and then fine-tuning it for another, one of the most rudimentary forms of transfer learning. Our experiments show that, rather than being helpful, pre-training can be detrimental, even if the tasks are similar (e.g., same labels, slightly blurred images).
2 EXPERIMENTS
A notable example of a critical-period-related deficit, commonly affecting humans, is amblyopia (reduced visual acuity in one eye) caused by cataracts during infancy or childhood (Taylor et al., 1979;
von Noorden, 1981). Even after surgical correction of cataracts, the ability of the patients to regain normal acuity in the affected eye depends both on the duration of the deficit and on its age of onset, with earlier and longer deficits causing more severe effects. In this section, we aim to study the effects of similar deficits in DNNs. To do so, we train a standard All-CNN architecture based on Springenberg et al. (2014) (see Appendix A) to classify objects in small 32 × 32 images from the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). We train with SGD using an exponential annealing schedule for the learning rate. To simulate the effect of cataracts, for the first t0 epochs the images in the dataset are downsampled to 8 × 8 and then upsampled back to 32 × 32 using bilinear interpolation, in practice blurring the image and destroying small-scale details.1 After that, the training continues for 160 more epochs, giving the network time to converge and ensuring it is exposed to the same number of uncorrupted images as in the control (t0 = 0) experiment.
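A sketch of this cataract-like corruption, as we read it, is given below; using bilinear interpolation for the downsampling step as well is our assumption, since only the upsampling mode is specified:

```python
import torch
import torch.nn.functional as F


def cataract_blur(images: torch.Tensor) -> torch.Tensor:
    """Simulate cataracts on a batch of 32x32 images: downsample to 8x8, then
    upsample back with bilinear interpolation, destroying small-scale detail."""
    small = F.interpolate(images, size=(8, 8), mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(32, 32), mode="bilinear", align_corners=False)


# During the first t0 epochs, the deficit is applied to every training batch,
# e.g.: x = cataract_blur(x) if epoch < t0 else x
```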
DNNs exhibit critical periods: In Figure 1, we plot the final performance of a network affected by the deficit as a function of the epoch t0 at which the deficit is corrected. We can readily observe the existence of a critical period: If the blur is not removed within the first 40-60 epochs, the final performance is severely decreased when compared to the baseline (up to a threefold increase in error). The decrease in performance follows trends commonly observed in animals, and may be qualitatively compared, for example, to the loss of visual acuity observed in kittens monocularly deprived from birth as a function of the length of the deficit (Mitchell, 1988).2
We can measure the sensitivity to a blur deficit during learning more accurately by introducing the deficit in a short window of constant length (40 epochs), starting at different epochs, and then measuring the decrease in the DNN's final performance compared to the baseline (Figure 1). Doing this, we observe that the sensitivity to the deficit peaks in the central part of the early rapid learning phase (at around 30 epochs), while introducing the deficit later produces little or no effect. A similar experiment performed on kittens, using a window of 10-12 days during which the animals are monocularly deprived, again shows a remarkable similarity between the profiles of the sensitivity curves (Olson & Freeman, 1980).
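Schematically, this sliding-window scan is just an outer loop over deficit onsets; in the sketch below, `make_model`, `train_one_epoch`, and `evaluate` are assumed helpers standing in for the training pipeline described above:

```python
def deficit_sensitivity(make_model, train_one_epoch, evaluate,
                        onsets, window=40, total_epochs=200, baseline_err=0.0):
    """For each onset t, train a fresh network with the blur deficit active during
    epochs [t, t + window), and record the increase in final test error."""
    results = {}
    for t in onsets:
        model = make_model()
        for epoch in range(total_epochs):
            train_one_epoch(model, blur=(t <= epoch < t + window))
        results[t] = evaluate(model) - baseline_err
    return results
```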
High-level deficits are not associated with a critical period: A natural question is whether any change in the input data distribution will have a corresponding critical period for learning. This is not the case for neuronal networks, which remain plastic enough to adapt to high-level changes in sensory processing (Daw, 2014). For example, it is well-reported that even adult humans can rapidly adapt to certain drastic changes, such as the inversion of the visual field (Stratton, 1896; Kohler, 1964). In Figure 2, we observe that DNNs are also largely unaffected by high-level deficits – such as vertical flipping of the image, or random permutation of the output labels: After deficit correction, the network quickly recovers its baseline performance. This hints at a finer interplay between the structure of the data distribution and the optimization algorithm, resulting in the existence of a critical period.
1We employed this method, instead of a simpler Gaussian blur, since it has a very similar effect and makes the quantification of information loss clearer.
2See Appendix C for details on how to compare different models and deficits.
Sensory deprivation: We now apply to the network a more drastic deficit, where each image is replaced by white noise. Figure 2 shows how this extreme deficit has a remarkably less severe effect than the one obtained by only blurring images: Training the network with white noise does not provide any information on the natural images, and results in milder effects than those caused by a deficit (e.g., image blur), which instead conveys some information, but leads the network to (incorrectly) learn that no fine structure is present in the images. A similar effect has been observed in animals, where a period of early sensory deprivation (dark-rearing) can lengthen the critical period and thus cause less severe effects than those documented in light-reared animals (Mower, 1991). We refer the reader to Appendix C for a more detailed comparison between sensory deprivation and training on white noise.
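Concretely, the white-noise condition (detailed in Appendix C) can be implemented as a dataset of fixed Gaussian-noise images that keep the original labels; the class below is our own illustrative sketch:

```python
import torch
from torch.utils.data import Dataset


class NoiseDataset(Dataset):
    """Each image is a fixed random Gaussian-noise pattern, so the network can
    only memorize the (uninformative) noise-to-label association."""

    def __init__(self, labels, shape=(3, 32, 32), seed=0):
        g = torch.Generator().manual_seed(seed)  # fixed patterns across epochs
        self.images = torch.randn(len(labels), *shape, generator=g)
        self.labels = labels                     # original dataset labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]
```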
Architecture, depth, and learning rate annealing: Figure 3 shows that a fully-connected network trained on the MNIST digit classification dataset also shows a critical period for the image blur deficit. Therefore, the convolutional structure is not necessary, nor is the use of natural images. Similarly, a ResNet-18 trained on CIFAR-10 also has a critical period, which is remarkably sharper than the one found in a standard convolutional network (Figure 1). This is especially interesting, since ResNets allow for easier backpropagation of gradients to the lower layers, thus suggesting that the critical period is not caused by vanishing gradients. However, Figure 2 (Right) shows that the presence of a critical period does indeed depend critically on the depth of the network. In Figure 3, we confirm that a critical period exists even when the network is trained with a constant learning rate, and therefore cannot be explained by an annealed learning rate in later epochs.
Optimization method and weight decay: Figure 3 (Bottom Right) shows that when using Adam as the optimization scheme, which renormalizes the gradients using a running mean of their first two moments, we still observe a critical period similar to that of standard SGD. However, changing the
hyperparameters of the optimization can change the shape of the critical period: In Figure 3 (Bottom Left) we show that increasing weight decay makes critical periods longer and less sharp. This can be explained by the fact that it both slows the convergence of the network and limits the ability of higher layers to change to overcome the deficit, thus encouraging lower layers to also learn new features.
3 FISHER INFORMATION ANALYSIS
We have established empirically that, in animals and DNNs alike, the initial phases of training are critical to the outcome of the training process. In animals, this strongly relates to changes in the brain architecture of the areas associated with the deficit (Daw, 2014). This is inevitably different in artificial networks, since their connectivity is formally fixed at all times during training. However, not all the connections are equally useful to the network: Consider a network encoding the approximate posterior distribution pw(y|x), parameterized by the weights w, of the task variable y given an input image x. The dependency of the final output on a specific connection can be estimated by perturbing the corresponding weight and looking at the magnitude of the change in the final distribution. Specifically, given a perturbation w′ = w + δw of the weights, the discrepancy between pw(y|x) and the perturbed network output pw′(y|x) can be measured by their Kullback-Leibler divergence, which, to second-order approximation, is given by:
Ex KL( pw′(y|x) ‖ pw(y|x) ) = δw · Fδw + o(δw2),
where the expectation over x is computed using the empirical data distribution Q̂(x) given by the dataset, and F := Ex∼Q̂(x)Ey∼pw(y|x)[∇w log pw(y|x)∇w log pw(y|x)T ] is the Fisher Information Matrix (FIM). The FIM can thus be considered a local metric measuring how much the perturbation of a single weight (or a combination of weights) affects the output of the network (Amari & Nagaoka, 2000). In particular, weights with low Fisher Information can be changed or “pruned” with little effect on the network’s performance. This suggests that the Fisher Information can be used as a measure of the effective connectivity of a DNN, or, more generally, of the “synaptic strength” of a connection (Kirkpatrick et al., 2017). Finally, the FIM is also a semidefinite approximation of the Hessian of the loss function (Martens, 2014) and hence of the curvature of the loss landscape at a particular point w during training, providing an elegant connection between the FIM and the optimization procedure (Amari & Nagaoka, 2000), which we will also employ later.
Unfortunately, the full FIM is too large to compute. Rather, we use its trace to measure the global or layer-wise connection strength, which we can compute efficiently using (Appendix A):
tr(F ) = Ex∼Q̂(x)Ey∼pw(y|x)[‖∇w log pw(y|x)‖2].
In order to capture the behavior of the off-diagonal terms, we also tried computing the log-determinant of the full matrix using the Kronecker-Factorized approximation of Martens & Grosse (2015), but we observed the same qualitative trend as the trace. Since the FIM is a local measure, it is very sensitive to the irregularities of the loss landscape. Therefore, in this section we mainly use ResNets, which have a relatively smooth landscape (Li et al., 2018). For other architectures we use instead a more robust estimator of the FIM based on the injection of noise in the weights (Achille & Soatto, 2018), also described in Appendix A.
Two phases of learning: As its name suggests, the FIM can be thought of as a measure of the quantity of information about the training data that is contained in the model (Fisher, 1925). Based on this, one would expect the overall strength of the connections to increase monotonically as we acquire information from experience. However, this is not the case: While during an initial phase the network acquires information about the data, which results in a large increase in the strength of the connections, once the performance in the task begins to plateau, the network starts decreasing the overall strength of its connections. However, this does not correspond to a reduction in performance; rather, performance keeps slowly improving. This can be seen as a “forgetting”, or “compression”, phase, during which redundant connections are eliminated and non-relevant variability in the data is discarded. It is well-established that the elimination (“pruning”) of unnecessary synapses is a fundamental process during learning and brain development (Rakic et al., 1986) (Figure 4, Center); in Figure 4 (Left) an analogous phenomenon is clearly and quantitatively shown for DNNs.
Strikingly, these changes in the connection strength are closely related to the sensitivity to critical-period-inducing deficits such as image blur, computed using the “sliding window” method as in Figure 1. In Figure 4 we see that the sensitivity closely follows the trend of the FIM. This is remarkable since the FIM is a local quantity computed at a single point during the training of a network in the absence of deficit, while sensitivity during a critical period is computed, using test data, at the end of the impaired network training. Figure 4 (Right) further emphasizes the effect of deficits on the FIM: in the presence of a deficit, the FIM grows and remains substantially higher even after the deficit is removed. This may be attributed to the fact that, when the data are so corrupted that classification is impossible, the network is forced to memorize the labels, therefore increasing the quantity of information needed to perform the same task.
Layer-wise effects of deficits: A layer-wise analysis of the FIM sheds further light on how the deficit affects the network. When the network (in this case All-CNN, which has a clearer division among layers than ResNet) is trained without deficits, the most important connections are in the intermediate layers (Figure 5, Left), which can process the input CIFAR-10 image at the most informative intermediate scale. However, if the network is initially trained on blurred data (Figure 5, top right), the strength of the connections is dominated by the top layer (Layer 6). This is to be expected, since the low-level and mid-level structures of the images are destroyed, making the lower layers ineffective. However, if the deficit is removed early in the training (Figure 5, top center), the network manages to “reorganize”, reducing the information contained in the last layer, and, at the same time, increasing the information in the intermediate layers. We refer to these phenomena as changes in “Information Plasticity”. If, however, the data change occurs after the consolidation phase, the network is unable to change its effective connectivity: The connection strength of each layer remains substantially constant. The network has lost its Information Plasticity and is past its critical period.
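The layer-wise decomposition of tr(F) simply partitions the squared-gradient sum by parameter group; a sketch is given below (sampling y from the model posterior, as in the trace estimator of Appendix A.2; grouping by top-level module name is our own choice):

```python
from collections import defaultdict

import torch
import torch.nn.functional as F


def layerwise_fisher_trace(model, x_batch):
    """Per-layer contributions to tr(F) for one batch of inputs."""
    traces = defaultdict(float)
    for xi in x_batch:
        model.zero_grad()
        log_probs = F.log_softmax(model(xi.unsqueeze(0)), dim=1)
        y = torch.multinomial(log_probs.exp(), 1).item()  # y ~ p_w(y|x)
        log_probs[0, y].backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                traces[name.split(".")[0]] += (p.grad ** 2).sum().item()
    return {layer: t / len(x_batch) for layer, t in traces.items()}
```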
Critical periods as bottleneck crossings: The analysis of the FIM also sheds light on the geometry of the loss function and the learning dynamics. Since the FIM can be interpreted as the local curvature of the loss landscape, Fig. 4 shows that learning entails crossing bottlenecks: In the initial phase the network enters regions of high curvature (high Fisher Information), and once consolidation begins, the curvature decreases, allowing it to cross the bottleneck and enter the valley below. If the statistics change after crossing the bottleneck, the network is trapped. In this interpretation, the early phases of convergence are critical in leading the network towards the “right” final valley. The end of critical periods comes after the network has crossed all bottlenecks (and thus learned the features) and entered a wide valley (region of the weight space with low curvature, or low Fisher Information).
4 DISCUSSION AND RELATED WORK
Critical periods have thus far been considered an exclusively biological phenomenon. At the same time, the analysis of DNNs has focused on asymptotic properties and neglected the initial transient behavior. To the best of our knowledge, we are the first to show that artificial neural networks exhibit critical period phenomena, and to highlight the critical role of the transient in determining the asymptotic performance of the network. Inspired by the role of synaptic connectivity in modulating critical periods, we introduce the use of Fisher Information to study this initial phase. We show that the initial sensitivity to deficits closely follows changes in the FIM, both global, as the network first rapidly increases and then decreases the amount of stored information, and layer-wise, as the network “reorganizes” its effective connectivity in order to optimally process information.
Our work naturally relates to the extensive literature on critical periods in biology. Despite artificial networks being an extremely reductionist approximation of neuronal networks, they exhibit behaviors that are qualitatively similar to the critical periods observed in human and animal models. Our information analysis shows that the initial rapid memorization phase is followed by a loss of Information Plasticity which, counterintuitively, further improves the performance. On the other hand, when combined with the analysis of Achille & Soatto (2018) this suggests that a “forgetting” phase may be desirable, or even necessary, in order to learn robust, nuisance-invariant representations.
The existence of two distinct phases of training has been observed and discussed by Shwartz-Ziv & Tishby (2017), although their analysis builds on the (Shannon) information of the activations, rather than the (Fisher) information in the weights. On a multi-layer perceptron (MLP), Shwartz-Ziv & Tishby (2017) empirically link the two phases to a sudden increase in the gradients’ covariance. It may be tempting to compare these results with our Fisher Information analysis. However, it must be noted that the FIM is computed using the gradients with respect to the model prediction, not to the ground truth label, leading to important qualitative differences. In Figure 6, we show that the covariance and norm of the gradients exhibit no clear trends during training with and without deficits, and, therefore, unlike the FIM, do not correlate with the sensitivity to critical periods. However,
a connection between our FIM analysis and the information in the activations can be established based on the work of Achille & Soatto (2018), which shows that the FIM of the weights can be used to bound the information in the activations. In fact, we may intuitively expect that pruning of connections naturally leads to loss of information in the corresponding activations. Thus, our analysis corroborates and expands on some of the claims of Shwartz-Ziv & Tishby (2017), while using an independent framework.
Aside from being more closely related to the deficit sensitivity during critical periods, the Fisher Information also has a number of technical advantages: Its diagonal is simple to estimate, even on modern state-of-the-art architectures and compelling datasets, and it is less sensitive to the choice of estimator of mutual information, avoiding some of the common criticisms of the use of information quantities in the analysis of deep learning models. Finally, the FIM allows us to probe fine changes in the effective connectivity across the layers of the network (Figure 5), which are not visible in Shwartz-Ziv & Tishby (2017).
A complete analysis of the activations should account not only for the amount of information (both task- and nuisance-related), but also for its accessibility, e.g., how easily task-related information can be extracted by a linear classifier. Following a similar idea, Montavon et al. (2011) aim to study the layer-wise, or “spatial” (but not temporal) evolution of the simplicity of the representation by performing a principal component analysis (PCA) of a radial basis function (RBF) kernel embedding of each layer representation. They show that, on a multi-layer perceptron, task-relevant information increasingly concentrates on the first principal components of the representation’s embedding, implying that it becomes more easily “accessible” layer after layer, while nuisance information (when it is codified at all) is encoded in the remaining components. In our work we instead focus on the temporal evolution of the weights. However, it is important to note that a network with simpler weights (as measured by the FIM) also requires a simpler smooth representation (as measured, e.g., by the RBF embedding) in order to operate properly, since it needs to be resistant to perturbations of the weights. Thus our analysis is wholly compatible with the intuitions of Montavon et al. (2011). It would also be interesting to study the joint spatio-temporal evolution of the network using both frameworks at once.
One advantage of focusing on the information of the weights rather than on the activations, or behavior of the network, is to have a readout of the “effective connectivity” during critical periods, which can be compared to similar readouts in animals. In fact, “behavioral” readouts upon deficit removal, both in artificial and neuronal networks, can potentially be confounded by deficit-coping changes at different levels of the visual pathways (Daw, 2014; Knudsen, 2004). On the other hand, deficits in deprived animals are mirrored by abnormalities in the circuitry of the visual pathways, which we characterize in DNNs using the FIM to study its “effective connectivity”, i.e., the connections that are actually employed by the network to solve the task. Sensitivity to critical periods and the trace of the Fisher Information peak at the same epochs, in accord with the evidence that skill development and critical periods in neuronal networks are modulated by changes (generally experience-dependent) in synaptic plasticity (Knudsen, 2004; Hensch, 2004). Our layer-wise analysis of the Fisher Information (Figure 5) also shows that visual deficits reinforce higher layers to the detriment of intermediate layers, leaving low-level layers virtually untouched. If the deficit is removed after the critical period ends, the network is not able to reverse these effects. Although the two systems are radically different, a similar response can be found in the visual pathways of animal models: Lower levels (e.g., retina, lateral geniculate nucleus) and higher-level visual areas (e.g., V2 and post-V2) show little remodeling upon deprivation, while most changes happen in different layers of V1 (Wiesel & Hubel, 1963a; Hendrickson et al., 1987).
An insightful interpretation of critical periods in animal models was proposed by Knudsen (2004): The initial connections of neuronal networks are unstable and easily modified (highly plastic), but as more “samples” are observed, they change and reach a more stable configuration which is difficult to modify. Learning can, however, still happen within the newly created connectivity pattern. This is largely compatible with our findings: Sensitivity to critical-period-inducing deficits peaks when connections are remodeled (Figure 4, Left), and different connectivity profiles are observed in networks trained with and without a deficit (Figure 5). Moreover, high-level deficits such as image-flipping and label permutation, which do not require restructuring of the network’s connections in order to be corrected, do not exhibit a critical period.
Applying a deficit at the beginning of the training may be compared to the common practice of pretraining, which is generally found to improve the performance of the network. Erhan et al. (2010) study the somewhat related, but now seldom used, practice of layer-wise unsupervised pre-training, and suggest that it may act as a regularizer by moving the weights of the network towards an area of the loss landscape closer to the attractors for good solutions, and that early examples have a stronger effect in steering the network towards particular solutions. Here, we have shown that pre-training on blurred data can have the opposite effect; i.e., it can severely decrease the final performance of the network. However, in our case, interpreting the deficit's effect as moving the network close to a bad attractor is difficult to reconcile with the smooth transition observed in the critical periods, since the network would either converge to this attractor, and thus have low accuracy, or escape it completely.
Instead, we reconcile our experiments with the geometry of the loss function by introducing a different explanation based on the interpretation of the FIM as an approximation of the local curvature. Figure 4 suggests that SGD encounters two different phases during the network training: At first, the network moves towards high-curvature regions of the loss landscape, while in the second phase the curvature decreases and the network eventually converges to a flat minimum (as observed in Keskar et al. (2017)). We can interpret these as the network crossing narrow bottlenecks during its training in order to learn useful features, before eventually entering a flat region of the loss surface once learning is completed and ending up trapped there. When combining this assumption with our deficit sensitivity analysis, we can hypothesize that the critical period occurs precisely upon crossing of this bottleneck. It is also worth noting that there is evidence that convergence to flat minima (minima with low curvature) in a DNN correlates with good generalization performance (Hochreiter & Schmidhuber, 1997; Li et al., 2018; Chaudhari et al., 2017; Keskar et al., 2017). Indeed, using this interpretation, Figure 4 (Right) tells us that networks more affected by the deficit converge to sharper minima. However, we have also found that the performance of the network is already mostly determined during the early “sensitive” phase. The final sharpness at convergence may therefore be an epiphenomenon, rather than the cause of good generalization.
5 CONCLUSION
Our goal in this paper is not so much to investigate the human (or animal) brain through artificial networks, as to understand fundamental information processing phenomena, whether in their biological or artificial implementations. It is also not our goal to suggest that, since they both exhibit critical periods, DNNs are necessarily a valid model of neurobiological information processing, although recent work has emphasized this aspect. We engage in an “Artificial Neuroscience” exercise in part to address a technological need to develop “explainable” artificial intelligence systems whose behavior can be understood and predicted. While traditionally well-understood mathematical models were used by neuroscientists to study biological phenomena, information processing in modern artificial networks is often just as poorly understood as in biology, so we chose to exploit well-known biological phenomena as probes to study information processing in artificial networks.
Conversely, it would also be interesting to explore ways to test whether biological networks prune connections as a consequence of a loss of Information Plasticity, rather than as a cause. The mechanisms underlying network reconfiguration during learning and development might be an evolutionary outcome obtained under the pressure of fundamental information processing phenomena.
ACKNOWLEDGEMENTS
We thank the anonymous reviewers for their thoughtful feedback, and for suggesting new experiments and relevant literature. Supported by ONR N00014-17-1-2072, ARO W911NF-17-1-0304, AFOSR FA9550-15-1-0229 and FA8650-11-1-7156.
A DETAILS OF THE EXPERIMENTS
A.1 ARCHITECTURES AND TRAINING
In all of the experiments, unless otherwise stated, we use the following All-CNN architecture, adapted from Springenberg et al. (2014):
conv 96 - conv 96 - conv 192 s2 - conv 192 - conv 192 - conv 192 s2 - conv 192 - conv1 192 - conv1 10 - avg. pooling - softmax
where each conv block consists of a 3× 3 convolution, batch normalization and ReLU activations. conv1 denotes a 1 × 1 convolution. The network is trained with SGD, with a batch size of 128, learning rate starting from 0.05 and decaying smoothly by a factor of .97 at each epoch. We also use weight decay with coefficient 0.001. In the experiments with a fixed learning rate, we fix the learning rate to 0.001, which we find to allow convergence without excessive overfitting. For the ResNet experiments, we use the ResNet-18 architecture from He et al. (2016) with initial learning rate 0.1, learning rate decay .97 per epoch, and weight decay 0.0005. When training with Adam, we use a learning rate of 0.001 and weight decay 0.0001.
When experimenting with varying network depths, we use the following architecture:
conv 96 - [conv 96·2^(i−1) - conv 96·2^i s2] for i = 1, …, n - conv 96·2^n - conv1 96·2^n - conv1 10
In order to avoid interference between the annealing scheme and the architecture, in these experiments we fix the learning rate to 0.001.
The Fully Connected network used for the MNIST experiments has hidden layers of size [2500, 2000, 1500, 1000, 500]. All hidden layers use batch normalization followed by ReLU activations. We fix the learning rate to 0.005. Weight decay is not used. We use data augmentation with random translations up to 4 pixels and random horizontal flipping. For MNIST, we pad the images with zeros to bring them to size 32× 32.
A.2 APPROXIMATIONS OF THE FISHER INFORMATION MATRIX
To compute the trace of the Fisher Information Matrix, we use the following expression derived directly from the definition:
tr(F ) = Ex∼Q̂(x)Ey∼pw(y|x)[tr(∇w log pw(y|x)∇w log pw(y|x)T )] = Ex∼Q̂(x)Ey∼pw(y|x)[‖∇w log pw(y|x)‖2],
where the input image x is sampled from the dataset, while the label y is sampled from the output posterior. Expectations are approximated by Monte-Carlo sampling. Notice, however, that this expression depends only on the local gradients of the loss with respect to the weights at a point w = w0, so it can be noisy when the loss landscape is highly irregular. This is not a problem for ResNets (Li et al., 2018), but for other architectures we use instead a different technique, proposed in Achille & Soatto (2018). In more detail, let L(w) be the standard cross-entropy loss. Given the current weights w0 of the network, we find the diagonal matrix Σ that minimizes:
L′ = Ew∼N(w0,Σ)[L(w)]− β log |Σ|,
where β is a parameter that controls the smoothness of the approximation. Notice that L′ can be minimized efficiently using the method in Kingma et al. (2015). To see how this relates to the Fisher Information Matrix, assume that L(w) can be approximated locally in w0 as L(w) = L0 + a · w + w ·Hw. We can then rewrite L′ as
L′ = L0 + tr(ΣH)− β log |Σ|.
Taking the derivative with respect to Σ, and setting it to zero, we obtain Σii = β/Hii. We can then use Σ to estimate the trace of the Hessian, and hence of the Fisher information.
A.3 CURVE FITTING
Fitting of sensitivity curves and synaptic density profiles from the literature was performed using:
f(t) = e^(−(t−d)/τ1) − k e^(−(t−d)/τ2)
as the fitting equation, where t is the age at the time of sampling and τ1, τ2, k and d are unconstrained parameters (Banks et al., 1975).
The exponential fit of the sensitivity to the Fisher Information trace uses the expression
$$F(t) = a \exp(c\,S_k(t)) + b,$$
where $a$, $b$ and $c$ are unconstrained parameters, $F(t)$ is the Fisher Information trace at epoch $t$ of the training of a network without deficits, and $S_k$ is the sensitivity computed using a window of size $k$. That is, $S_k(t)$ is the increase in the final test error over a baseline when the network is trained in the presence of a deficit between epochs $t$ and $t+k$.
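An illustrative fit of both expressions with scipy.optimize.curve_fit, on synthetic placeholder data (the arrays and initial guesses are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def deficit_profile(t, tau1, tau2, k, d):
    return np.exp(-(t - d) / tau1) - k * np.exp(-(t - d) / tau2)

def exp_link(s, a, b, c):
    return a * np.exp(c * s) + b

ages = np.linspace(1.0, 100.0, 30)
sensitivity = deficit_profile(ages, 20.0, 5.0, 0.8, 0.5) + 0.01 * np.random.randn(30)
profile_params, _ = curve_fit(deficit_profile, ages, sensitivity,
                              p0=[20.0, 5.0, 1.0, 0.0], maxfev=10000)

fisher = exp_link(sensitivity, 2.0, 0.1, 3.0)
link_params, _ = curve_fit(exp_link, sensitivity, fisher,
                           p0=[1.0, 0.0, 1.0], maxfev=10000)
```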
B ADDITIONAL PLOTS
[Figure: additional training curves for three conditions: no deficit, blurred up to epoch 100, and always blurred.]
C EXPERIMENTAL DESIGN AND COMPARISON WITH ANIMAL MODELS
Critical periods are task- and deficit-specific. The specific task we address is visual acuity, but the performance is necessarily measured through different mechanisms in animals and Artificial Neural Networks. In animals, visual acuity is traditionally measured by testing the ability to discriminate between black-and-white contrast gratings (with varying spatial frequency) and a uniform gray field. The outcome of such tests generally correlates well with the ability of the animal to use the eye to solve other visual tasks relying on acuity. Convolutional Neural Networks, on the other hand, have a very different sensory processing mechanism (based on heavily quantized data), which may trivialize such a test. Rather, we directly measure the performance of the network on a high-level task, specifically image classification, for which CNNs are optimized.
We chose to simulate cataracts in our DNN experiments, a deficit which allows us to explore its complex interactions with the structure of the data and the architecture of the network. Unfortunately, while the overall trends of cataract-induced critical periods have been studied and understood in animal models, there is not enough data to confidently regress sensitivity curves comparable to those obtained in DNNs. For this reason, in Figure 1 we compare the performance loss in a DNN trained in the presence of a cataract-like deficit with the results obtained from monocularly deprived kittens, which exhibit similar trends and are one of the most common experimental paradigms in the visual neurosciences.
Simulating complete visual deprivation in a neural network is not as simple as feeding a constant stimulus: a network presented with a constant blank input will rapidly become trivial and thus unable to train on new data. This is to be expected, since a blank input is a perfectly predictable stimulus and thus the network can quickly learn the (trivial) solution to the task. We instead wanted to model an uninformative stimulus, akin to noise. Moreover, even when the eyes are sutured or kept in darkness, there will be background excitation of photoreceptors that is best modeled as noise. To account for this, we simulate sensory deprivation by replacing the input images with a dataset composed of (uninformative) random Gaussian noise. This way the network is trained on solving the highly non-trivial task of memorizing the association between the finitely many noise patterns and their corresponding labels.
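A minimal sketch of how such a noise dataset could be constructed; the sizes, seed, and array names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patterns, shape, n_classes = 50000, (3, 32, 32), 10
noise_images = rng.standard_normal((n_patterns, *shape)).astype(np.float32)
noise_labels = rng.integers(0, n_classes, size=n_patterns)
# During "deprived" epochs, batches are drawn from (noise_images, noise_labels)
# instead of the real dataset; the pairing stays fixed so it can be memorized.
```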
1. What is the main contribution of the paper, and how does it relate to previous works in the field?
2. What are the strengths and weaknesses of the paper's empirical simulations, and how do they contribute to the overall understanding of artificial neural network learning?
3. How does the paper's use of Fisher Information impact the results, and are there any limitations to this approach?
4. What is the significance of Tishby's result (2017) and Montavon et al's work (2011) in relation to the paper's findings?
5. Does the paper provide a thorough analysis of the empirical findings, and are there any areas that could benefit from further exploration?
6. How does the paper's focus on empirical studies impact its overall impact and contributions to the field, and are there any potential drawbacks to this approach?
7. Are there any potential applications or future directions for research that arise from the paper's findings, and how might these be explored further?
Review
The paper is interesting and I like it. It draws parallels from the well-known critical learning periods in biological systems to artificial neural network learning.
A series of empirical simulation experiments is presented, all of which aim to disturb the learning process of the DNN and to artificially create criticality. They provide food for thought; in order to introduce some quantitative results, the authors use the well-known Fisher Information to measure the changes. So far so good, and interesting.
I was disappointed to see Tishby's result (2017) only remotely discussed; an earlier work than Tishby's is Montavon et al. (2011) in JMLR. That work also discusses properties of successive compression and dimensionality reduction, and is perhaps the starting point of quantitative analysis of various DNNs.
Up to this point, the paper presents no theoretical contribution, only empirical findings that may or may not be ubiquitous in DNN learning systems. The latter point may be worthwhile to discuss and analyse.
Overall, the paper is interesting with its nice empirical studies but stays somewhat superficial. To learn more, a simpler toy model may be worthwhile to study.
ICLR | Title
Constraint-based graph network simulator
Abstract
In the rapidly advancing area of learned physical simulators, nearly all methods train a forward model that directly predicts future states from input states. However, many traditional simulation engines use a constraint-based approach instead of direct prediction. Here we present a framework for constraint-based learned simulation, where a scalar constraint function is implemented as a trainable function approximator, and future predictions are computed as the solutions to a constraint satisfaction problem. We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver. The architecture can be trained by standard backpropagation. We test the model on a variety of challenging physical domains, including simulated ropes, bouncing balls, colliding irregular shapes and splashing fluids. Our model achieves better or comparable performance to top learned simulators. A key advantage of our model is the ability to generalize to more solver iterations at test time to improve the simulation accuracy. We also show how hand-designed constraints can be added at test time to satisfy objectives which were not present in the training data, which is not possible with forward approaches. Our constraint-based framework is applicable to any setting in which forward learned simulators are used, and more generally demonstrates key ways that learned models can leverage popular techniques from numerical methods.
1 INTRODUCTION
Consider a bowling ball colliding with a bowling pin. You might explain this event as involving a pair of forces being generated, one which causes the pin to move, and the other which causes the ball to careen away with a different direction and speed. This kind of intuitive cause-and-effect approach is analogous to physical simulators that apply an explicit forward model to calculate a future state directly from the current one, such as when numerically integrating discretized equations of motion.
An alternative, but equally valid, way to explain the collision is in terms of constraint satisfaction: the ball and pin cannot occupy the same location at the same time, and their combined energies and momenta must be conserved, so the post-collision trajectories are the only way the future can unfold without violating these constraints. This constraint-based approach is analogous to physical simulators that use an implicit function to model a system of constraints over the current and future states, and which generate a prediction by searching for a future state that respects all constraints.
Both families of simulators—those based on explicit, forward functions versus those which define the dynamics implicitly, via constraints—are widely used in physics, engineering, and graphics. In principle they can model the same types of dynamics; however, they differ in how their respective predictions are computed, and in practice they strike different trade-offs that determine why one or the other is preferred in different domains. For example, explicit methods are popular for large systems with (mostly) independent local effects whose space and time derivatives are relatively smooth, and their accuracy can often be increased by discretizing space and time more finely. Implicit approaches are often preferred for systems with strong interactions, such as rigid and stiff dynamics, and more accurate solutions can often be found by using more sophisticated constraint solvers or by increasing the computational budget (e.g., solver iterations) allocated to searching for solutions. In machine learning (ML), there have been rapid advances recently in methods for learning to simulate complex dynamic processes, however almost all (e.g., Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021)) have focused on explicit forward model approaches, with few exceptions (Yang et al., 2020).
Here we present a framework for learning to simulate complex dynamics via constraint satisfaction. Our “Constraint-based Graph Network Simulator” (C-GNS) defines a single scalar-valued constraint function that represents whether a future state satisfies the physical constraints, conditioned on the current and previous states. The constraint function is implemented as a Graph Neural Network (GNN) (Bronstein et al., 2017; Battaglia et al., 2018), which can model systems with rich compositional structure—multiple bodies, complex meshes, etc. To predict the next state via the constraint function’s implicit representation of the dynamics, a gradient-based solver finds a proposed state which satisfies the constraints. We train it through the solver by backpropagation. We also introduce a hybrid approach that proposes and refines the future state using an explicit iterative predictor, rather than solving for learned constraints.
We tested the C-GNS on a variety of challenging physical simulation domains generated by several different simulation engines: simulated rope, bouncing balls, and bouncing irregular rigid shapes (MuJoCo, Todorov et al. (2012)) and splashing fluids (Flex, Macklin et al. (2014)). We found that the C-GNS’s simulated rollouts were more accurate than a state-of-the-art Graph Net Simulator (GNS) (Sanchez-Gonzalez et al., 2020) with comparable number of parameters. At test time, the C-GNS could use additional solver iterations to improve its predictive accuracy, striking desired speed-accuracy trade-offs. It could also satisfy new, hand-designed constraints jointly alongside its learned constraints. Neither of these capabilities are possible in explicit forward-style approaches.
2 BACKGROUND AND RELATED WORK
Constraint solvers are central to many physics simulators. Most rigid-body and game engines use constraints to model joints, collision and contact (Baraff, 1994). They are used for limiting strain in realistic cloth simulation (Thomaszewski et al., 2009), and are a core component in Eulerian incompressible fluid solvers to solve for pressure (Chorin, 1967). Recently, position-based (Müller et al., 2007) and projective dynamics methods (Bouaziz et al., 2014) have become very popular for interactive simulation. These methods express dynamics purely as constraints, and can simulate a wide range of physical systems, from rigid bodies over soft bodies to fluids (Macklin et al., 2014).
Machine learning methods for accelerating scientific simulation of complex systems, such as turbulence (Kochkov et al., 2021; Wang et al., 2020) and aerodynamics (Thuerey et al., 2020; Zhang et al., 2018), have grown rapidly in recent years. GNN-based learned simulators, in particular, are a very flexible approach which can model a wide range of systems, from articulated dynamics (Sanchez-Gonzalez et al., 2018) to particle-based physics (Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020) and mesh-based continuum systems (Pfaff et al., 2021; De Avila Belbute-Peres et al., 2020), and generalize well to unseen scenarios. Combining learning algorithms with principles from physics and numerical methods, such as auxiliary loss terms and rich inductive biases, can improve sample complexity, computational efficiency, and generalization (Wu et al., 2018; Karniadakis et al., 2021; Chen et al., 2018; Rubanova et al., 2019). Imposing Hamiltonian (Greydanus et al., 2019; Sanchez-Gonzalez et al., 2019; Chen et al., 2019) and Lagrangian (Lutter et al., 2019; Cranmer et al., 2020; Finzi et al., 2020) mechanics in learned simulators offers unique speed/accuracy tradeoffs and can preserve symmetries more effectively.
Recent methods have been proposed for learning constraint functions and solving them in a model’s forward pass (Duvenaud et al. (2020)’s “Deep Implicit Layers” tutorial is an excellent hands-on survey). Such models can play games (Amos & Kolter, 2017; Wang et al., 2019), optimize power flow (Donti et al., 2021), support robotic planning (Loula et al., 2020), and perform combinatorial optimization (Bartunov et al., 2020). Solvers such as gradient descent and Newton’s method are differentiable, and support training by backpropagation, but this can be computationally expensive, so approaches such as Deep Equilibrium Models (DEM) (Bai et al., 2019; 2020) use implicit differentiation to compute gradients only at the solution point.
Despite the popularity of constraint-based traditional simulators, only a single simulator which uses learned constraints has been reported (Yang et al., 2020). Their “Neural Projections” method, based on Goldenthal et al. (2007), iteratively proposes a future state with an Euler step, then projects the proposal onto a learned constraint manifold, implemented as a multilayer perceptron (MLP). Crucially, their constraint function only measures how much an individual state violates the learned constraints, and thus is not an implicit representation of the dynamics. It is suitable for quasi-static regimes, but not scenarios such as the elastic collisions in the bowling ball example described above.
3 MODEL FRAMEWORK
Simulation basics A physical trajectory, measured at discrete time intervals, is a sequence of states, $(X_1, \dots, X_T)$, where $X_t$ represents properties such as the positions, velocities, masses, etc., of elements of the system. A physical simulator, $s$, is a function that maps current and/or previous state(s), which we term the context, $X_{\le t}$, to a predicted future state, $\hat{X}_{t+1} = s(X_{\le t})$ (see Figure 1a)¹. A simulated physical trajectory, termed a rollout, $(X_t, \hat{X}_{t+1}, \hat{X}_{t+2}, \dots)$, can be generated by repeatedly applying $s$ to its own predicted state, $\hat{X}_{t+1} = s(\hat{X}_{\le t})$.
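As a sketch, rollout generation amounts to the following loop; `simulator` is an assumed callable wrapping $s$, and the context window of 4 states mirrors the inputs used later in Section 4.2:

```python
from collections import deque

def rollout(simulator, initial_states, n_steps):
    context = deque(initial_states, maxlen=4)   # X_{<=t}: a short window of states
    trajectory = list(initial_states)
    for _ in range(n_steps):
        next_state = simulator(list(context))   # X-hat_{t+1} = s(X_{<=t})
        trajectory.append(next_state)
        context.append(next_state)              # feed predictions back in
    return trajectory
```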
Simulators are often comprised of a PREDICTOR mechanism which maps the context $X_{\le t}$ to an update value $\hat{Y}$ that represents information about the system's temporal evolution at the current time. Then $\hat{Y}$ is used by an UPDATER mechanism to update the current state to the next state: $\hat{X}_{t+1} = \mathrm{UPDATER}(X_{\le t}, \hat{Y})$, e.g., updating current positions and velocities represented by $X_t$ with new velocities and accelerations represented by $\hat{Y}$, to predict the next state.
Explicit simulators Across science, engineering, and graphics, a popular class of simulators are defined explicitly: the state update $\hat{Y}$ is predicted directly from $X_{\le t}$ using an explicit forward function, $\hat{Y} = f_D(X_{\le t})$, as illustrated in Figure 1b. Among the rapidly growing family of learned simulators, the forward function $f_D$ is typically implemented using a neural network (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021).
Constraint-based implicit simulators Here we explore learned simulators based on implicit formulations of the dynamics. Rather than predicting the desired state directly, as in explicit formulations, our implicit simulator uses a differentiable constraint function, $c = f_C(X_{\le t}, \hat{Y})$, where $c$ is a scalar that quantifies how well a proposed state update $\hat{Y}$ agrees with $X_{\le t}$. A future prediction is generated by applying a solver, such as an optimization or zero-finding algorithm, to find a $\hat{Y}$ that satisfies the constraint function, and applying the UPDATER to update $X_t$ to $\hat{X}_{t+1}$. The $f_C$ can represent all the physical constraints in the system, including the time dynamics.
¹Although physics is Markovian, we use $X_{\le t}$ as input because our framework can also apply to dynamic processes which are non-Markovian. Providing previous states can also often be helpful when there are hidden properties of the system which are only identifiable over a sequence of observed states, and when a state does not represent velocity or momentum information.
As illustrated in Figure 1d, we formulate our constraint-solving procedure via an iterative method that starts with an initial proposal, $Y^{(0)}$. On the $i$-th iteration, the solver uses the gradient of $f_C$ w.r.t. $Y$ at the current proposal to compute a change to the proposal, $\delta Y = -\lambda\, \nabla_Y f_C(X_{\le t}, Y)\big|_{Y=Y^{(i)}}$. This $\delta Y$ is then used to revise the proposal to $Y^{(i+1)} = Y^{(i)} + \delta Y$. This process repeats for $N$ steps, and the final proposal value is treated as the PREDICTOR's output, $\hat{Y} = Y^{(N)}$.
Our constraint-based model's $f_C$ is defined as a trainable function approximator which is real-valued and lower bounded at zero, and uses gradient descent to find the $\hat{Y}$ that minimizes it, where $\lambda$ is a fixed step size. This induces the semantics that the desired $\hat{Y} = \arg\min_Y f_C(X_{\le t}, Y)$.
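A minimal JAX sketch of this gradient-descent solver; `f_c` stands for any differentiable constraint function of (context, Y), and the toy quadratic at the end is only for illustration:

```python
import jax
import jax.numpy as jnp

def solve(f_c, context, y0, lam=1e-3, n_iters=5):
    grad_fn = jax.grad(lambda y: f_c(context, y))
    y = y0
    for _ in range(n_iters):
        y = y - lam * grad_fn(y)  # delta Y = -lam * grad_Y f_C
    return y                      # Y_hat = Y^(N)

# Toy example: a quadratic constraint whose minimum sits at the context value.
f_c = lambda ctx, y: jnp.sum((y - ctx) ** 2)
y_hat = solve(f_c, jnp.ones(3), jnp.zeros(3), lam=0.1, n_iters=50)
```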
We also explore a second constraint-solving procedure, inspired by Yang et al. (2020)'s Neural Projections' use of "fast projection" (Goldenthal et al., 2007). Specifically, $\lambda = -\frac{f_C(X_{\le t},\, Y^{(i)})}{\big\|\nabla_Y f_C(X_{\le t}, Y)\big|_{Y=Y^{(i)}}\big\|^2}$. Unlike gradient descent, fast projection is a zero-finding algorithm, so in this case $f_C$ is not lower bounded. This induces the semantics that $f_C(X_{\le t}, \hat{Y}) = 0$.
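For comparison, a sketch of one standard fast-projection update of Goldenthal et al. (2007), moving against the gradient by $f_C/\|\nabla_Y f_C\|^2$; the small epsilon guard is an added numerical-safety assumption:

```python
import jax
import jax.numpy as jnp

def fast_projection_step(f_c, context, y, eps=1e-12):
    c = f_c(context, y)
    g = jax.grad(lambda y_: f_c(context, y_))(y)
    return y - (c / (jnp.sum(g ** 2) + eps)) * g  # one zero-finding step
```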
This general formulation of constraint-based learned simulation can be trained by backpropagating loss gradients through the solver loop². The computational budget of the forward pass can be varied via the number of solver iterations $N$.
Explicit iterative simulators As a hybrid between forward and constraint-based simulators, we introduced a model which iteratively refines a proposed state update, like in the constraint-based approach described above, but using an explicit function to directly output a δY at each iteration, rather than solving a constraint function (see Figure 1c). See Section 4.3 for details.
4 EXPERIMENTS
4.1 EXPERIMENTAL TASK DOMAINS
We test our framework on a variety of physical environments, shown in Figure 2: ROPE, BOUNCING BALLS and BOUNCING RIGIDS, whose ground truth training and test data were generated by the MuJoCo physics simulator, as well as BOXBATH from Li et al. (2019). These environments demonstrate a diverse set of physical constraints: ‘hard’ constraints (preserving the shape of the rigid object and resolving collisions), and ‘soft’ constraints on fluid movement, handling gravity and preserving the momentum of the rope and bouncing balls. See the Supplementary Materials for details.
4.2 MODEL IMPLEMENTATIONS
Representing the physical system Our experimental domains are physical systems comprised of sets of interacting point-like elements, e.g., objects, particles, mesh vertices, etc. We represent the state as $X_t = (p^j_t)_{j=1\ldots|X_t|}$, where $|X_t|$ is the number of elements and $p^j_t$ is the $j$-th element's position at time $t$. There are also other static properties of the physical elements, e.g., masses, material types, etc., which we represent with $Z$ to keep it distinct from the dynamic state information represented by $X_t$. The input context is $X_{\le t} = (Z, X_{t-3}, X_{t-2}, X_{t-1}, X_t)$.
²Implicit differentiation at the solution point should be applicable as well, and could potentially offer computational benefits, as mentioned in Section 2, though we do not explore that here.
In our implementation, $\hat{Y}$ represents the predicted changes in position (i.e., the "average velocity" across the time step)³, $\hat{y}^j = \Delta\hat{p}^j_{t+1} = \hat{p}^j_{t+1} - p^j_t$. The UPDATER then computes $\hat{X}_{t+1}$ using $\hat{p}^j_{t+1} = p^j_t + \Delta\hat{p}^j_{t+1}$, where $p^j_t$ is provided in the input $X_{\le t}$.
Constructing the input graph Our implementations of $f_D$, $f_{DI}$, and $f_C$ use GNNs as the function approximators, so we need to pack the context, $X_{\le t}$, and (for $f_{DI}$ and $f_C$) the proposed state update information, $Y^{(i)}$, into an input graph, $G_t = (V_t, E_t)$. The edges $E_t$ represent possible interactions among the elements, such as fully connected edges to represent collisions and rigid attachments in BOUNCING BALLS and BOUNCING RIGIDS, spring constraints in ROPE, and interactions among particles within a fixed connectivity radius in BOXBATH.
We enforced translation-invariance by construction, by never providing absolute positions as input to the models. Instead, the $j$-th input node's features are the static properties and a sequence of the three most recent position changes (i.e., average velocities), $v^j_t = [z^j, \Delta p^j_{t-2}, \Delta p^j_{t-1}, \Delta p^j_t]$, where $\Delta p^j_t = p^j_t - p^j_{t-1}$. For $f_{DI}$ and $f_C$, which also take the solver's current proposed $Y^{(i)}$, we also concatenate the proposed average velocity from the $i$-th solver iteration, $y^{j,(i)}$, as input. For the input edge feature for an edge that connects from node $j$ to $k$, we also provide the relative displacement vector between the nodes' positions, $e^{jk}_t = p^k_t - p^j_t$.
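An illustrative NumPy construction of these translation-invariant features; the array names and shapes are assumptions:

```python
import numpy as np

def node_features(z, positions, y_prop=None):
    # positions: [4, n_nodes, dim] holding p_{t-3}, ..., p_t
    dp = positions[1:] - positions[:-1]  # Delta p_{t-2}, Delta p_{t-1}, Delta p_t
    feats = [z, dp[0], dp[1], dp[2]]
    if y_prop is not None:               # proposed average velocity y^{(i)}
        feats.append(y_prop)
    return np.concatenate(feats, axis=-1)

def edge_features(positions_t, senders, receivers):
    return positions_t[receivers] - positions_t[senders]  # e^{jk}_t = p^k_t - p^j_t

pos = np.zeros((4, 5, 2))
print(node_features(np.ones((5, 1)), pos).shape)  # (5, 7)
```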
GNN-based Encode-Process-Decode core We implemented fD, fDI, and fC using Graph Networks (GN) (Battaglia et al., 2018), arranged in the Encode-Process-Decode architecture, similar to previous work on GN-based learned simulators (Sanchez-Gonzalez et al., 2018; 2020; Pfaff et al., 2021). The Encoder uses two MLPs to encode node and edge features into high-dimensional latent vectors. The Processor applies multiple GNs, with unshared weights, in sequence, with node and edge residual connections at each step. We do not use global updates for the GNs. The Decoder uses an MLP to produce an output for each node.
The $f_D$ directly returns $\hat{Y}$. The $f_{DI}$ returns a change to the proposed update $\delta Y$ for the current iteration. The $f_C$'s Decoder returns a scalar for each node to produce a constraint value per node $\{c^j \mid j = 1 \dots |V|\}$. These node-wise constraint values are averaged to compute a single scalar constraint for the entire system, $c = f_C(X_{\le t}, \hat{Y}) = \frac{1}{|V|}\sum_{j=1}^{|V|} c^j$.
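In toy form, the decode-and-aggregate step is simply the following; the squaring noted in the comment belongs to the C-GNS-GD variant described in Section 4.3:

```python
import jax.numpy as jnp

def aggregate_constraint(per_node_decoder_out):
    c_nodes = per_node_decoder_out ** 2  # squaring is the C-GNS-GD choice (Section 4.3)
    return jnp.mean(c_nodes)             # c = (1/|V|) * sum_j c^j

print(aggregate_constraint(jnp.array([0.1, -0.2, 0.3])))
```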
Solving the constraint For $f_{DI}$ and $f_C$ we initialize $Y^{(0)}$ to the most recent average velocity, $y^{j,(0)} = \Delta p^j_t$⁴. We used auto-differentiation in JAX to compute the gradient function, $\nabla_Y f_C$, and the step size $\lambda$ was specific to the model variant, as described below. During training we used $N = 5$ solver iterations.
4.3 MODEL VARIANTS
The key questions in this work are whether constraint-based learned simulators can compete with explicit, forward learned simulators, whether implementing the constraint function with GNNs is more effective than with MLPs, and how minima-based constraint functions solved by gradient descent compare to constraints defined as the zeros of a function which are solved by fast projection (Goldenthal et al., 2007). The following model variants allow us to answer these questions.
Forward GNN This is an explicit, forward GNN-based learned simulator based on the GNS models from Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021). It directly predicts the state update Ŷ from the past time points X≤t.
C-GNS Gradient Descent (C-GNS-GD) and C-GNS-Fast Projections (C-GNS-FP) These are our proposed constraint-based GNN models. For the C-GNS-GD, the scalar per-node output $c^j$ was squared, to force the overall $f_C$ to be non-negative, and a gradient descent solver with a fixed step size, $\lambda = 0.001$, was used to minimize it. For C-GNS-FP, the $\lambda$ was based on "fast projection" (Goldenthal et al., 2007; Yang et al., 2020), as described in Section 3. Supplementary Figure B.5(c-d) shows ablations.
³For BOXBATH we vary a number of modelling choices to best match those in Sanchez-Gonzalez et al. (2020). The major difference is that we set $\hat{Y}$ to be the average acceleration rather than the average velocity. See Supplementary Materials for other differences.
⁴To ensure analogous information is provided downstream of $f_D$, the update rule also includes the previous average velocity: $\hat{p}^j_{t+1} = p^j_t + \Delta p^j_t + \hat{y}^j$.
Iterative GNN We implemented a hybrid between the Forward GNN and C-GNS, as shown in Figure 1c. It was identical to the C-GNS models, except its fDI directly predicted proposed state updates as in fD, rather than being computed via the gradients as was done with fC.
ConstraintMLP Gradient Descent (ConstraintMLP-GD) and ConstraintMLP-Fast Projections (ConstraintMLP-FP) These were MLP-based constraint models, which, rather than using GNNs to implement fC, instead concatenated the embeddings of all the input nodes into a single vector and passed them to an MLP implementation of fC. By default, these models cannot handle variable-length inputs, so we padded smaller states with zeros up to the maximum state size. The ConstraintMLP-FP was the MLP analog to our C-GNS-FP, and was similar to Neural Projections (Yang et al., 2020). The ConstraintMLP-GD used gradient descent, and was the MLP analog to our C-GNS-GD. We omit the results for the ConstraintMLP models on BOXBATH (1024 nodes), as MLPs do not generally work well on physical systems with more than a few particles (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018).
4.4 TRAINING AND EVALUATION
We trained the models to make next-step predictions, computing the L2 loss between the predicted $\hat{X}_{t+1}$ and the corresponding ground truth $X_{t+1}$, averaged over nodes. All model weights and biases were trained using standard backpropagation with the Adam optimizer.
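A hedged JAX sketch of training through the unrolled solver; `f_c_apply` below is a toy stand-in for the constraint GNN, and all values are illustrative:

```python
import jax
import jax.numpy as jnp

def f_c_apply(params, context, y):
    # Toy stand-in for the constraint GNN: a scaled quadratic around the context.
    return params * jnp.sum((y - context) ** 2)

def predict(params, context, y0, lam=1e-3, n_iters=5):
    # Unrolled gradient-descent solver from Section 3.
    grad_y = jax.grad(lambda y: f_c_apply(params, context, y))
    y = y0
    for _ in range(n_iters):
        y = y - lam * grad_y(y)
    return y

def loss_fn(params, context, y0, y_target):
    y_hat = predict(params, context, y0)
    return jnp.mean((y_hat - y_target) ** 2)  # per-node L2 on the state update

# Backprop through the solver loop is ordinary reverse-mode autodiff.
context, y0, y_target = jnp.ones(3), jnp.zeros(3), jnp.ones(3)
grads = jax.grad(loss_fn)(1.0, context, y0, y_target)
```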
At test time, we compute 1-step metrics by evaluating the 1-step errors along each point of the ground truth trajectory. We also evaluate rollout errors by iteratively applying the learned model starting from an initial state, over 160 rollout steps, and computing the error between the predicted and ground truth trajectories.
5 RESULTS
Predictive accuracy⁵ Our experimental results show that our C-GNS-GD's performance was generally better than the other model variants. Figure 3 compares the different models on 1-step and rollout position MSE (see Supplementary Table B.1 for numerical results). For each dataset, we used the same number of message-passing steps (MP) for all GN-based models. We used 2 MPs for the ROPE dataset, and 1 MP for all other tasks.
⁵Videos of the model rollouts are available at sites.google.com/view/constraint-based-simulator
The C-GNS-GD has lower 1-step MSE between the ground truth and predicted positions than the other models across all datasets. Qualitatively, we observed that for the Forward GNN with a single message-passing step, the box in BOXBATH "melts" over time, as the forward model cannot preserve its rigid shape (see Videos). The comparable C-GNS-GD, by contrast, maintains the rigidity more effectively. These quantitative results suggest that constraint-based learned simulators are a competitive alternative to explicit, forward learned simulators. We generally found that the Iterative GNN was fairly competitive with the C-GNS-GD in overall performance and better than the Forward GNN.
We also found that the C-GNS-FP was generally less stable across seeds, and not as accurate as the C-GNS-GD. The same conclusion holds for ConstraintMLP-FP versus ConstraintMLP-GD. We speculate that the fast projection algorithm may make training challenging because the step size $\lambda$ is proportional to $f_C$, which may cause poor zero-finding early in training, when $f_C$ is not yet informative. Additionally, we find that the C-GNS-FP algorithm becomes unstable in areas with shallow constraint gradients, perhaps because its $\lambda$ depends on the inverse of the gradient's norm.
We explored how varying the message-passing steps and solver iterations ($N$) influenced the relative performance among the models on our ROPE dataset. Figure 4 shows that the C-GNS-GD generally required fewer parameters and message-passing steps to achieve 1-step MSE comparable to the other models. Supplementary Figure B.3 shows similar results for the rollout MSE. For most combinations of message-passing steps and number of solver iterations, C-GNS-GD (green) outperforms the Iterative GNN (yellow), C-GNS-FP (purple), as well as the Forward GNN (blue) with the same number of MPs (the Forward GNN is not an iterative model, so we plot it as a single bar). We hypothesize that the solver iterations in the C-GNS and Iterative GNN may play a similar role to message passing with shared weights.
Interpreting the learned constraints To better understand the learned $f_C$ functions in the C-GNS-GD, Figure 5 visualizes the node-wise constraint values as a function of $Y$ (proposed average velocity) for different nodes in the ROPE dataset, while holding the other nodes' proposed update $Y$ fixed.
We also overlay the sequence of five points that represent the proposed $Y^{(i)}$ steps from the solver, where all nodes were jointly optimized. The figure shows the learned $f_C$ has a minimum near the ground truth $Y$, which the gradient descent steps are able to reach.
Incorporating novel constraints at test time We next explored a unique advantage of the constraint-based model: because the $f_C$ measures the degree to which the physical constraints are violated, we can incorporate additional, hand-designed constraints at test time and use the model to potentially satisfy them. For the ROPE dataset, we designed three constraint functions that return positive values which increase quadratically as the rope enters different "forbidden" regions of the space: a vertical wall, a horizontal floor, and a disk-shaped region. We weighted these constraint terms by a coefficient hyperparameter, added each of the hand-designed constraints to the learned $f_C$ term of C-GNS-GD, and ran the forward evaluation of the model.
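A sketch of how such a combined constraint could look; `learned_f_c`, the wall position, and the weight are illustrative stand-ins, not the paper's exact functions:

```python
import jax.numpy as jnp

learned_f_c = lambda ctx, y: jnp.sum(y ** 2)  # stand-in for the trained constraint

def wall_penalty(positions, wall_x=0.5):
    # Positive and growing quadratically once any node crosses x = wall_x.
    violation = jnp.maximum(positions[..., 0] - wall_x, 0.0)
    return jnp.sum(violation ** 2)

def combined_constraint(ctx, y, weight=10.0):
    positions = ctx["p_t"] + y  # proposed next positions from the update Y
    return learned_f_c(ctx, y) + weight * wall_penalty(positions)

c = combined_constraint({"p_t": jnp.zeros((5, 2))}, jnp.ones((5, 2)))
```

The solver from Section 3 can then be run unchanged on `combined_constraint` in place of the learned $f_C$ alone.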
As shown in Figure 6, the model was able to simulate the dynamics in a way that avoided the corresponding forbidden region. In some cases, satisfying the joint constraint resulted in unintuitive behaviors, such as the rope links changing in length to adapt to the obstacle (Videos). However, this is to be expected, as the minimum of the joint constraint may not overlap with the minimum of the learned constraint, which is the one that would otherwise guarantee length preservation. For this example we added a further hand-designed constraint that incentivizes maintaining relative distances between nodes. In general this is a powerful example of how constraint-based models can generalize outside their training data, and solve both for the learned dynamics and arbitrary desired constraints.
Generalizing to larger systems via increased solver iterations In principle, iterative and constraint-based simulators should find more accurate solutions by increasing the number of solver iterations, $N$. We investigated whether the C-GNS-GD and Iterative GNN trained on ROPE could generalize from the $N_{\mathrm{train}} = 5$ iterations on which they were trained to $N_{\mathrm{test}} \in [0, 15]$. We also analyzed whether increased solver iterations could improve generalization performance from training on ropes with 5-10 nodes to test ropes with 20 nodes.
Figure 7a (top row) shows that for test ropes that match the 5-10 nodes experienced during training, the Iterative GNN (light blue) overfits very heavily to $N_{\mathrm{test}} = N_{\mathrm{train}} = 5$: error increases abruptly for $N \le 4$ and $N \ge 6$. By contrast, the C-GNS-GD (light red) generalizes much better to different $N_{\mathrm{test}}$. Figure 7a (bottom row) shows that for test ropes with 20 nodes, the Iterative GNN again overfits, while the C-GNS-GD can generalize well to longer ropes if $N_{\mathrm{test}}$ is increased.
We also trained the Iterative GNN and C-GNS-GD with additional loss terms applied to the $Y^{(i)}$ at each solver iteration, not only the final one, $\hat{Y} = Y^{(N)}$. We used an exponential decay factor, $\alpha = 0.25$, which downweighted this additional loss term more heavily for earlier solver proposals. The dark blue and red curves in Figure 7a show how this additional loss further improves generalization to more solver iterations and larger systems at test time for the Iterative GNN, but especially for the C-GNS-GD. Figure 7b visualizes how increasing the solver iterations systematically improves the quality of the long-term rollout accuracy on the ROPE dataset.
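In sketch form, this per-iteration loss could be implemented as follows, assuming a solver that records its intermediate proposals; the function name is illustrative:

```python
import jax.numpy as jnp

def multi_step_loss(proposals, y_target, alpha=0.25):
    # proposals = [Y^(1), ..., Y^(N)]; the final iterate gets weight alpha^0 = 1.
    n = len(proposals)
    weights = [alpha ** (n - i) for i in range(1, n + 1)]
    losses = [jnp.mean((y - y_target) ** 2) for y in proposals]
    return sum(w * l for w, l in zip(weights, losses))

print(multi_step_loss([jnp.zeros(3)] * 5, jnp.ones(3)))
```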
Together these results show the C-GNS-GD is effective in making use of additional resources at test time. This opens the exciting possibility of training on small, simple systems, and testing on large, complex systems. See Supplementary Figure B.2 for further details.
6 DISCUSSION
We presented a general-purpose framework for constraint-based learned simulation, where a learned constraint function implicitly represents the dynamics, and future predictions are generated via a constraint solver. We implemented our framework using GNNs as the constraint function and gradient descent as the constraint solver, and tested it in a variety of challenging physical simulation problems. Our results showed that our C-GNS has competitive or better performance compared to previous learned simulators. We demonstrated unique abilities to generalize to novel, hand-designed constraints, and use more solver iterations than experienced during training to improve the accuracy on larger systems.
We can hypothesize about the relationship between explicit, forward learned simulators and implicit, constraint-based ones in terms of the sharing schemes of these architectures. The C-GNS has a stronger inductive bias than the Forward GNN. The transformation of $f_C$ in C-GNS effectively ties the parameters in the resulting $\nabla_Y f_C$ function, and the solver iterations are analogous to how a recurrent neural network's parameters are shared over iterations. In contrast, the message-passing steps in the Forward GNN used in our work are unshared. In principle, the $f_D$ of the Forward GNN is more expressive because, given enough depth, after training it could learn to take parameter values that are equivalent to the shared parameters of C-GNS. Our results shown in Figure 4 support this possibility: the Forward GNN with many more message-passing steps eventually approaches the C-GNS's performance. Moreover, we speculate that the C-GNS's inductive biases contribute to its advantages in terms of incorporating novel hand-designed constraints and generalizing to more solver iterations and larger systems.
More broadly, the performance, generality and unique advantages of constraint-based learned simulation make it an important new direction in the advancement of machine learning methods for complex simulation problems in science and engineering.
7 REPRODUCIBILITY STATEMENT
We are committed to open-sourcing the model code once the paper is accepted, and we will also open-source the MuJoCo datasets that we generated for this paper. We provide more details on the model implementation, as well as the hyperparameters used for each model, in the Supplementary Material.
1. What are the strengths and weaknesses of the proposed method in incorporating explicit physical constraints in learning-based simulation frameworks?
2. How does the method compare to prior works in terms of experiment environments and generalization abilities?
3. What are the limitations regarding the expressiveness of the experiments and the missing details of the method?
4. How might the incorporated constraints lead to better/larger-scale generalization than what's already shown in the literature?
5. What are the necessary details on constructing the constraint function, f_C, for different physical scenarios?
6. Are there any assumptions or trade-offs regarding the differentiability of the constraint function, computational resources, and time spent on each forward pass?
7. How do the three methods (Iterative GNN, C-GNS-GD, and C-GNS-FP) compare in terms of performance gain and computational efficiency?
8. How should one choose the weight on the constraint term when incorporating novel constraints at test time?
9. How does this paper's approach to incorporating new boundary constraints during testing compare to previous works such as [1, 3]?
Summary Of The Paper
This paper aims to add explicit/human-defined constraints to learning-based simulation frameworks, where a learned constraint function implicitly regularizes the dynamics, and future predictions are generated via a constraint solver. The authors built the framework on top of graph neural networks (GNNs) to capture the compositionality of the underlying system and enforce the constraint using an implicit constraint function optimized via gradient descent.
The authors tested the proposed method in four physical simulation environments, including rope, bouncing balls, bouncing rigids, and BoxBath. Experiment results show that the proposed C-GNS has a competitive or better performance compared to prior learned simulators. In addition, they have also demonstrated that C-GNS can generalize to unseen, hand-designed constraints by applying more solver iterations than experienced during training to improve the accuracy on larger systems.
Review
[Strength]
This paper tackles an important question of how we can incorporate prior knowledge in the form of explicit physical constraints in the learning-based simulators to enable better generalization.
The experiments on four environments show that the proposed method can deliver better prediction results than unconstrained baselines.
[Weakness]
While I like the direction this paper is going, I have concerns regarding the expressiveness of the experiments and the missing details of the method.
Prior methods on learning-based physics simulators have shown results on a set of much larger-scale and more complex environments involving fluids on rough terrain, fabrics with novel geometries, etc. [1, 2]. The experiment environments used in this paper may be a bit too simple compared to what's out there in the literature, making it hard to know how the method works in larger and more complicated scenarios.
Continuing from my previous point, [1] showed generalization to environments with drastically different geometry than seen during training, and [2] showed that the model could scale up to significantly larger and more complex cloth than seen during training. Adding constraints based on our understanding of physics is supposed to improve the model's generalization ability. As a result, I don't think the current experiments in the paper are enough to demonstrate the benefit of the constrained optimization process. The authors should consider including concrete experimental evidence on how the incorporated constraints may lead to even better/larger-scale generalization than what's already shown in the literature.
The authors should also consider including more details on how they construct the constraint function, f_C, e.g., for the fluid, rigid object, boundary conditions, etc. Without further details, it is hard for me to imagine how they are defined and implemented.
Related to my previous point, do we need any assumptions on f_C other than being differentiable? For example, for discontinuous events like contacts, I imagine we can differentiate through the LCP constraints, but how useful are the gradients?
How much more computing resources and time are needed to apply gradient descent on the constraint function? Multi-step message passing and solver iterations do not come for free. It may significantly increase forward prediction time for each time step. Therefore, it is essential to provide the time spent on each forward pass for different design choices and discuss the trade-off between the performance gain and the decrease in computational efficiency.
According to Figure 3, it is a bit hard to know which of the following three methods works better: (i) Iterative GNN, (ii) C-GNS-GD, and (iii) C-GNS-FP. When giving a new scenario, is there a way to know which one we should use, or should we try all of them and choose the one that works the best?
How did you choose the weight on the constraint term when incorporating novel constraints at test time? From the video, the rope is jittering. What might be the reason, and is it possible to resolve it? [1] shows generalization results of fluid simulation on unseen terrains much different from what the model was trained on. [3] also showed examples of generalization to unseen obstacle configurations. How would you compare the way your paper and [1, 3] incorporate new boundary constraints during testing?
[1] Benjamin Ummenhofer, Lukas Prantl, Nils Thuerey, Vladlen Koltun, "Lagrangian Fluid Simulation with Continuous Convolutions" [2] Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, "Learning Mesh-Based Simulation with Graph Networks" [3] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, Peter W. Battaglia, "Learning to Simulate Complex Physics with Graph Networks"
===================
[Post Rebuttal]
I thank the authors for the detailed feedback, which addressed most of my concerns. I hope the authors can incorporate the response into the manuscript to improve its clarity. I'm happy to raise my score from 5 to 6. |
ICLR | Title
Constraint-based graph network simulator
Abstract
In the rapidly advancing area of learned physical simulators, nearly all methods train a forward model that directly predicts future states from input states. However, many traditional simulation engines use a constraint-based approach instead of direct prediction. Here we present a framework for constraint-based learned simulation, where a scalar constraint function is implemented as a trainable function approximator, and future predictions are computed as the solutions to a constraint satisfaction problem. We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver. The architecture can be trained by standard backpropagation. We test the model on a variety of challenging physical domains, including simulated ropes, bouncing balls, colliding irregular shapes and splashing fluids. Our model achieves better or comparable performance to top learned simulators. A key advantage of our model is the ability to generalize to more solver iterations at test time to improve the simulation accuracy. We also show how hand-designed constraints can be added at test time to satisfy objectives which were not present in the training data, which is not possible with forward approaches. Our constraint-based framework is applicable to any setting in which forward learned simulators are used, and more generally demonstrates key ways that learned models can leverage popular techniques from numerical methods.
1 INTRODUCTION
Consider a bowling ball colliding with a bowling pin. You might explain this event as involving a pair of forces being generated, one which causes the pin to move, and the other which causes the ball to careen away with a different direction and speed. This kind of intuitive cause-and-effect approach is analogous to physical simulators that apply an explicit forward model to calculate a future state directly from the current one, such as when numerically integrating discretized equations of motion.
An alternative, but equally valid, way to explain the collision is in terms of constraint satisfaction: the ball and pin cannot occupy the same location at the same time, and their combined energies and momenta must be conserved, so the post-collision trajectories are the only way the future can unfold without violating these constraints. This constraint-based approach is analogous to physical simulators that use an implicit function to model a system of constraints over the current and future states, and which generate a prediction by searching for a future state that respects all constraints.
Both families of simulators—those based on explicit, forward functions versus those which define the dynamics implicitly, via constraints—are widely used in physics, engineering, and graphics. In principle they can model the same types of dynamics; however, they differ in how their respective predictions are computed, and in practice they strike different trade-offs that determine why one or the other is preferred in different domains. For example, explicit methods are popular for large systems with (mostly) independent local effects whose space and time derivatives are relatively smooth, and their accuracy can often be increased by discretizing space and time more finely. Implicit approaches are often preferred for systems with strong interactions, such as rigid and stiff dynamics, and more accurate solutions can often be found by using more sophisticated constraint solvers or by increasing the computational budget (e.g., solver iterations) allocated to searching for solutions. In machine learning (ML), there have recently been rapid advances in methods for learning to simulate complex dynamic processes; however, almost all (e.g., Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021)) have focused on explicit forward model approaches, with few exceptions (Yang et al., 2020).
Here we present a framework for learning to simulate complex dynamics via constraint satisfaction. Our “Constraint-based Graph Network Simulator” (C-GNS) defines a single scalar-valued constraint function that represents whether a future state satisfies the physical constraints, conditioned on the current and previous states. The constraint function is implemented as a Graph Neural Network (GNN) (Bronstein et al., 2017; Battaglia et al., 2018), which can model systems with rich compositional structure—multiple bodies, complex meshes, etc. To predict the next state via the constraint function’s implicit representation of the dynamics, a gradient-based solver finds a proposed state which satisfies the constraints. We train it through the solver by backpropagation. We also introduce a hybrid approach that proposes and refines the future state using an explicit iterative predictor, rather than solving for learned constraints.
We tested the C-GNS on a variety of challenging physical simulation domains generated by several different simulation engines: simulated rope, bouncing balls, and bouncing irregular rigid shapes (MuJoCo, Todorov et al. (2012)) and splashing fluids (Flex, Macklin et al. (2014)). We found that the C-GNS’s simulated rollouts were more accurate than a state-of-the-art Graph Net Simulator (GNS) (Sanchez-Gonzalez et al., 2020) with a comparable number of parameters. At test time, the C-GNS could use additional solver iterations to improve its predictive accuracy, striking desired speed-accuracy trade-offs. It could also satisfy new, hand-designed constraints jointly alongside its learned constraints. Neither of these capabilities is possible in explicit forward-style approaches.
2 BACKGROUND AND RELATED WORK
Constraint solvers are central to many physics simulators. Most rigid-body and game engines use constraints to model joints, collision and contact (Baraff, 1994). They are used for limiting strain in realistic cloth simulation (Thomaszewski et al., 2009), and are a core component in Eulerian incompressible fluid solvers to solve for pressure (Chorin, 1967). Recently, position-based (Müller et al., 2007) and projective dynamics methods (Bouaziz et al., 2014) have become very popular for interactive simulation. These methods express dynamics purely as constraints, and can simulate a wide range of physical systems, from rigid bodies through soft bodies to fluids (Macklin et al., 2014).
Machine learning methods for accelerating scientific simulation of complex systems, such as turbulence (Kochkov et al., 2021; Wang et al., 2020) and aerodynamics (Thuerey et al., 2020; Zhang et al., 2018), have grown rapidly in recent years. GNN-based learned simulators, in particular, are a very flexible approach which can model a wide range of systems, from articulated dynamics (Sanchez-Gonzalez et al., 2018) to particle-based physics (Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020) and mesh-based continuum systems (Pfaff et al., 2021; De Avila Belbute-Peres et al., 2020), and generalize well to unseen scenarios. Combining learning algorithms with principles from physics and numerical methods, such as auxiliary loss terms and rich inductive biases, can improve sample complexity, computational efficiency, and generalization (Wu et al., 2018; Karniadakis et al., 2021; Chen et al., 2018; Rubanova et al., 2019). Imposing Hamiltonian (Greydanus et al., 2019; Sanchez-Gonzalez et al., 2019; Chen et al., 2019) and Lagrangian (Lutter et al., 2019; Cranmer et al., 2020; Finzi et al., 2020) mechanics in learned simulators offers unique speed/accuracy tradeoffs and can preserve symmetries more effectively.
Recent methods have been proposed for learning constraint functions and solving them in a model’s forward pass (Duvenaud et al. (2020)’s “Deep Implicit Layers” tutorial is an excellent hands-on survey). Such models can play games (Amos & Kolter, 2017; Wang et al., 2019), optimize power flow (Donti et al., 2021), support robotic planning (Loula et al., 2020), and perform combinatorial optimization (Bartunov et al., 2020). Solvers such as gradient descent and Newton’s method are differentiable, and support training by backpropagation, but this can be computationally expensive, so approaches such as Deep Equilibrium Models (DEM) (Bai et al., 2019; 2020) use implicit differentiation to compute gradients only at the solution point.
Despite the popularity of constraint-based traditional simulators, only a single simulator which uses learned constraints has been reported (Yang et al., 2020). Their “Neural Projections” method, based on Goldenthal et al. (2007), iteratively proposes a future state with an Euler step, then projects the proposal onto a learned constraint manifold, implemented as a multilayer perceptron (MLP). Crucially, their constraint function only measures how much an individual state violates the learned constraints, and thus is not an implicit representation of the dynamics. It is suitable for quasi-static regimes, but not scenarios such as the elastic collisions in the bowling ball example described above.
3 MODEL FRAMEWORK
Simulation basics A physical trajectory, measured at discrete time intervals, is a sequence of states, (X1, . . . , XT ), where Xt represents properties such as the positions, velocities, masses, etc, of elements of the system. A physical simulator, s, is a function that maps current and/or previous state(s), which we term the context, X≤t, to a predicted future state, X̂t+1 = s(X≤t) (see Figure 1a)¹. A simulated physical trajectory, termed a rollout, (Xt, X̂t+1, X̂t+2, . . . ), can be generated by repeatedly applying s to its own predicted state, X̂t+1 = s(X̂≤t).
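To make the rollout procedure concrete, the following is a minimal sketch; the function `s`, the fixed history window, and the array shapes are illustrative assumptions rather than the paper's exact interface:

```python
import jax.numpy as jnp

def rollout(s, context, num_steps):
    """Autoregressively apply a learned simulator s to its own predictions.

    context: array [history, num_nodes, dim] holding the states X_{<=t}.
    Returns the simulated trajectory [num_steps, num_nodes, dim].
    """
    predictions = []
    for _ in range(num_steps):
        x_next = s(context)  # X_hat_{t+1} = s(X_{<=t})
        predictions.append(x_next)
        # Slide the history window so the prediction feeds back as input.
        context = jnp.concatenate([context[1:], x_next[None]], axis=0)
    return jnp.stack(predictions)
```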
Simulators are often comprised of a PREDICTOR mechanism which maps the context X≤t to an update value Ŷ, which represents information about the system’s temporal evolution at the current time. Then Ŷ is used by an UPDATER mechanism to update the current state to the next state: X̂t+1 = UPDATER(X≤t, Ŷ ), e.g., updating current positions and velocities represented by Xt with new velocities and accelerations represented by Ŷ, to predict the next state.
Explicit simulators Across science, engineering, and graphics, a popular class of simulators are defined explicitly: the state update Ŷ is predicted directly from X≤t using an explicit forward function, Ŷ = fD(X≤t), as illustrated in Figure 1b. Among the rapidly growing family of learned simulators, the forward function fD is typically implemented using a neural network (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021).
Constraint-based implicit simulators Here we explore learned simulators based on implicit formulations of the dynamics. Rather than predicting the desired state directly, as in explicit formulations, our implicit simulator uses a differentiable constraint function, c = fC(X≤t, Ŷ ), where c is a scalar that quantifies how well a proposed state update Ŷ agrees with X≤t. A future prediction is generated by applying a solver, such as an optimization or zero-finding algorithm, to find a Ŷ that satisfies the constraint function, and applying the UPDATER to update Xt to X̂t+1. The fC can represent all the physical constraints in the system, including the time dynamics.
¹ Although physics is Markovian, we use X≤t as input because our framework can also apply to dynamic processes that are non-Markovian. Providing previous states can also often be helpful when there are hidden properties of the system which are only identifiable over a sequence of observed states, and when a state does not represent velocity or momentum information.
As illustrated in Figure 1d, we formulate our constraint-solving procedure via an iterative method that starts with an initial proposal, Y^(0). On the i-th iteration, the solver uses the gradient of fC w.r.t. Y at the current proposal to compute a change to the proposal, δY = −λ ∇_Y fC(X≤t, Y)|_{Y=Y^(i)}. This δY is then used to revise the proposal: Y^(i+1) = Y^(i) + δY. This process repeats for N steps, and the final proposal value is treated as the PREDICTOR’s output, Ŷ = Y^(N).
Our constraint-based model’s fC is defined as a trainable function approximator which is real-valued and lower bounded at zero, and uses gradient descent with a fixed step size λ to find the Ŷ that minimizes it. This induces the semantics that the desired Ŷ = arg min_Y fC(X≤t, Y).
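A minimal JAX sketch of this solver loop follows, assuming `f_c(context, y)` is the scalar constraint function; the λ = 0.001 and N = 5 defaults match the values reported in Section 4, while the names and shapes are illustrative:

```python
import jax

def solve_constraint_gd(f_c, context, y0, num_iters=5, lam=1e-3):
    """Find Y_hat ~= argmin_Y f_C(X_{<=t}, Y) with N gradient-descent steps."""
    grad_y = jax.grad(f_c, argnums=1)      # nabla_Y f_C(X_{<=t}, Y)
    y = y0                                 # initial proposal Y^(0)
    for _ in range(num_iters):
        y = y - lam * grad_y(context, y)   # delta_Y = -lambda * gradient
    return y                               # Y_hat = Y^(N)
```

Because the loop is ordinary differentiable JAX code, training losses can be backpropagated through all solver iterations with `jax.grad` as usual.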
We also explore a second constraint-solving procedure, inspired by Yang et al. (2020)’s Neural Projections’ use of “fast projection” (Goldenthal et al., 2007). Specifically, λ = −fC(X≤t, Y^(i)) / ‖∇_Y fC(X≤t, Y)|_{Y=Y^(i)}‖². Unlike gradient descent, fast projection is a zero-finding algorithm, so in this case fC is not lower bounded. This induces the semantics that fC(X≤t, Ŷ) = 0.
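For comparison, here is a sketch of the classical fast-projection update of Goldenthal et al. (2007) that this variant builds on; the sign convention follows the classical Newton-style zero-finding step, and the small `eps` guard against vanishing gradients is our own addition:

```python
import jax
import jax.numpy as jnp

def solve_constraint_fp(f_c, context, y0, num_iters=5, eps=1e-12):
    """Newton-style zero-finding: drive f_C(X_{<=t}, Y) toward 0."""
    grad_y = jax.grad(f_c, argnums=1)
    y = y0
    for _ in range(num_iters):
        g = grad_y(context, y)
        step = f_c(context, y) / (jnp.sum(g * g) + eps)  # f_C / ||grad||^2
        y = y - step * g
    return y
```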
This general formulation of constraint-based learned simulation can be trained by backpropagating loss gradients through the solver loop². The computational budget of the forward pass can be varied via the number of solver iterations N.
Explicit iterative simulators As a hybrid between forward and constraint-based simulators, we introduced a model which iteratively refines a proposed state update, like in the constraint-based approach described above, but using an explicit function to directly output a δY at each iteration, rather than solving a constraint function (see Figure 1c). See Section 4.3 for details.
4 EXPERIMENTS
4.1 EXPERIMENTAL TASK DOMAINS
We test our framework on a variety of physical environments, shown in Figure 2: ROPE, BOUNCING BALLS and BOUNCING RIGIDS, whose ground truth training and test data were generated by the MuJoCo physics simulator, as well as BOXBATH from Li et al. (2019). These environments demonstrate a diverse set of physical constraints: ‘hard’ constraints (preserving the shape of the rigid object and resolving collisions), and ‘soft’ constraints on fluid movement, handling gravity and preserving the momentum of the rope and bouncing balls. See the Supplementary Materials for details.
4.2 MODEL IMPLEMENTATIONS
Representing the physical system Our experimental domains are physical systems comprised of sets of interacting point-like elements, e.g., objects, particles, mesh vertices, etc. We represent the state as Xt = (p_t^j)_{j=1...|Xt|}, where |Xt| is the number of elements, and p_t^j is the j-th element’s position at time t. There are also other static properties of the physical elements, e.g., masses, material types, etc., which we represent with Z to keep it distinct from the dynamic state information represented by Xt. The input context is X≤t = (Z, Xt−3, Xt−2, Xt−1, Xt).
2Implicit differentiation at the solution point should be applicable as well, and potentially offer computational benefits as mentioned in the Section 2, though we do not explore that here.
In our implementation, Ŷ represents the predicted changes in position (i.e., the “average velocity” across the time step)³: ŷ^j = ∆p̂_{t+1}^j = p̂_{t+1}^j − p_t^j. The UPDATER then computes X̂t+1 using p̂_{t+1}^j = p_t^j + ∆p̂_{t+1}^j, where p_t^j is provided in the input X≤t.
Constructing the input graph Our implementations of the fD, fDI, and fC use GNNs as the function approximators, so we need to pack the context, X≤t, and (for the fDI and fC) the proposed state update information, Y^(i), into an input graph, Gt = (Vt, Et). The edges Et represent possible interactions among the elements, such as fully connected edges to represent collisions and rigid attachments in BOUNCING BALLS and BOUNCING RIGIDS, spring constraints in ROPE, and interactions among particles within a fixed connectivity radius in BOXBATH.
We enforced translation invariance by construction, by never providing absolute positions as input to the models. Instead, the j-th input node’s features are the static properties and a sequence of the three most recent position changes (i.e., average velocities), v_t^j = [z^j, ∆p_{t−2}^j, ∆p_{t−1}^j, ∆p_t^j], where ∆p_t^j = p_t^j − p_{t−1}^j. For fDI and fC, which also take the solver’s current proposed Y^(i), we also concatenate the proposed average velocity from the i-th solver iteration, y^{j,(i)}, as input. For the input edge feature for an edge that connects from node j to k, we also provide the relative displacement vector between the nodes’ positions, e_t^{jk} = p_t^k − p_t^j.
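A hypothetical sketch of this translation-invariant feature construction follows; the array names and shapes are assumptions for illustration:

```python
import jax.numpy as jnp

def node_features(positions, static_props, y_proposal=None):
    """positions: [4, num_nodes, dim] for X_{t-3..t}; static_props: [num_nodes, s]."""
    deltas = positions[1:] - positions[:-1]   # [dp_{t-2}, dp_{t-1}, dp_t]
    feats = [static_props, deltas[0], deltas[1], deltas[2]]
    if y_proposal is not None:                # f_DI / f_C also see Y^(i)
        feats.append(y_proposal)              # proposed average velocity
    return jnp.concatenate(feats, axis=-1)    # absolute positions never appear

def edge_features(positions_t, senders, receivers):
    """Relative displacements e_t^{jk} = p_t^k - p_t^j for edges j -> k."""
    return positions_t[receivers] - positions_t[senders]
```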
GNN-based Encode-Process-Decode core We implemented fD, fDI, and fC using Graph Networks (GN) (Battaglia et al., 2018), arranged in the Encode-Process-Decode architecture, similar to previous work on GN-based learned simulators (Sanchez-Gonzalez et al., 2018; 2020; Pfaff et al., 2021). The Encoder uses two MLPs to encode node and edge features into high-dimensional latent vectors. The Processor applies multiple GNs, with unshared weights, in sequence, with node and edge residual connections at each step. We do not use global updates for the GNs. The Decoder uses an MLP to produce an output for each node.
The fD directly returns Ŷ. The fDI returns a change to the proposed update δY for the current iteration. The fC’s Decoder returns a scalar for each node to produce a constraint value per node, {c^j | j = 1 . . . |V|}. These node-wise constraint values are averaged to compute a single scalar constraint c for the entire system, c = fC(X≤t, Ŷ) = (1/|V|) ∑_{j=1}^{|V|} c^j.
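A minimal sketch of this constraint head is shown below; `decoded_nodes` stands in for the Decoder MLP's per-node outputs, and the squaring matches the C-GNS-GD variant described in Section 4.3:

```python
import jax.numpy as jnp

def constraint_from_nodes(decoded_nodes):
    """decoded_nodes: [num_nodes, 1] per-node Decoder outputs."""
    c_per_node = decoded_nodes[:, 0] ** 2  # squared => f_C non-negative (C-GNS-GD)
    return jnp.mean(c_per_node)            # c = (1/|V|) * sum_j c^j
```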
Solving the constraint For fDI and fC we initialize Y^(0) = ∆p_t^j to the most recent average velocity⁴. We used auto-differentiation in JAX to compute the gradient function, ∇_Y fC, and the step size λ was specific to the model variant, as described below. During training we used N = 5 solver iterations.
4.3 MODEL VARIANTS
The key questions in this work are whether constraint-based learned simulators can compete with explicit, forward learned simulators, whether implementing the constraint function with GNNs is more effective than with MLPs, and how minima-based constraint functions solved by gradient descent compare to constraints defined as the zeros of a function which are solved by fast projection (Goldenthal et al., 2007). The following model variants allow us to answer these questions.
Forward GNN This is an explicit, forward GNN-based learned simulator based on the GNS models from Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021). It directly predicts the state update Ŷ from the past time points X≤t.
C-GNS Gradient Descent (C-GNS-GD) and C-GNS-Fast Projections (C-GNS-FP) These are our proposed constraint-based GNN models. For the C-GNS-GD, the scalar per-node output c^j was squared, to force the overall fC to be non-negative, and a gradient descent solver with a fixed step size, λ = 0.001, was used to minimize it. For C-GNS-FP, the λ was based on “fast projection” (Goldenthal et al., 2007; Yang et al., 2020), as described in Section 3. Supplementary Figure B.5(c-d) shows ablations.
³ For BOXBATH we vary a number of modelling choices to best match those in Sanchez-Gonzalez et al. (2020). The major difference is that we set Ŷ to be the average acceleration rather than average velocity. See Supplementary Materials for other differences.
⁴ To ensure analogous information is provided downstream of fD, the update rule also includes the previous average velocity: p̂_{t+1}^j = p_t^j + ∆p_t^j + ŷ^j.
Iterative GNN We implemented a hybrid between the Forward GNN and C-GNS, as shown in Figure 1c. It was identical to the C-GNS models, except its fDI directly predicted proposed state updates as in fD, rather than being computed via the gradients as was done with fC.
ConstraintMLP Gradient Descent (ConstraintMLP-GD) and ConstraintMLP-Fast Projections (ConstraintMLP-FP) These were MLP-based constraint models, which, rather than using GNNs to implement fC, instead concatenated the embeddings of all the input nodes into a single vector and passed them to an MLP implementation of fC. By default, these models cannot handle variable-length inputs, so we padded smaller states with zeros up to the maximum state size. The ConstraintMLP-FP was the MLP analog to our C-GNS-FP, and was similar to Neural Projections (Yang et al., 2020). The ConstraintMLP-GD used gradient descent, and was the MLP analog to our C-GNS-GD. We omit the results for the ConstraintMLP models on BOXBATH (1024 nodes), as MLPs do not generally work well on physical systems with more than a few particles (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018).
4.4 TRAINING AND EVALUATION
We trained the models to make next-step predictions, by computing the L2 loss between the predicted X̂t+1 and the corresponding ground truth Xt+1, averaged over nodes. All model weights and biases were trained using standard backpropagation with the Adam optimizer.
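A hedged sketch of one such training step is given below; `apply_model` (graph construction, the GNN constraint, the solver, and the UPDATER bundled together) is an assumed interface, and the 1e-4 learning rate is illustrative:

```python
import jax
import jax.numpy as jnp
import optax

def loss_fn(params, context, x_target):
    # apply_model is an assumed interface: X_hat_{t+1} via N solver iterations.
    x_pred = apply_model(params, context)
    return jnp.mean(jnp.sum((x_pred - x_target) ** 2, axis=-1))  # node-averaged L2

optimizer = optax.adam(1e-4)

@jax.jit
def train_step(params, opt_state, context, x_target):
    loss, grads = jax.value_and_grad(loss_fn)(params, context, x_target)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss
```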
At test time, we compute 1-step metrics by evaluating the 1-step errors along each point of the ground truth trajectory. We also evaluate rollout errors by iteratively applying the learned model starting from an initial state, over 160 rollout steps, and computing the error between the predicted and ground truth trajectories.
5 RESULTS
Predictive accuracy⁵ Our experimental results show that our C-GNS-GD’s performance was generally better than the other model variants. Figure 3 compares the different models on 1-step and rollout position MSE (see Supplementary Table B.1 for numerical results). For each dataset, we used the same number of message-passing steps (MP) for all GN-based models. We used 2 MPs for the ROPE dataset, and 1 MP for all other tasks.
⁵ Videos of the model rollouts are available at sites.google.com/view/constraint-based-simulator
The C-GNS-GD has lower 1-step MSE between the ground truth and predicted positions than other models across all datasets. Qualitatively, we observed that for the Forward GNN with a single message-passing step, the box in BOXBATH “melts” over time, as the forward model cannot preserve its rigid shape (see Videos). The comparable C-GNS-GD, by contrast, maintains the rigidity more effectively. These quantitative results suggest that constraint-based learned simulators are a competitive alternative to explicit, forward learned simulators. We generally found that the Iterative GNN was fairly competitive with the C-GNS-GD in overall performance and better than the Forward GNN.
We also found that the C-GNS-FP was generally less stable across seeds, and not as accurate as the C-GNS-GD. The same conclusion holds for ConstraintMLP-FP versus ConstraintMLP-GD. We speculate that the fast projection algorithm may make training challenging because the step size λ is proportional to fC, which may cause poor zero-finding early in training when fC is not yet informative. Additionally, we find that the C-GNS-FP algorithm becomes unstable in areas with shallow constraint gradients, perhaps because its λ depends on the inverse of the gradient’s norm.
We explored how varying the message-passing steps and solver iterations (N ) influenced the relative performance among the models in our ROPE dataset. Figure 4 shows that the C-GNS-GD generally required fewer parameters and message-passing steps to achieve comparable 1-step MSE to the other models. Supplementary Figure B.3 shows similar results for the rollout MSE. For most combinations of message-passing steps and number of solver iterations, C-GNS-GD (green) outperforms the Iterative GNN (yellow), C-GNS-FP (purple) as well as the Forward GNN (blue) with the same number of MPs (the Forward GNN is not an iterative model, so we plot it as a single bar). We hypothesize that the solver iterations in the C-GNS and Iterative GNN may play a similar role to message passing with shared weights.
Interpreting the learned constraints To better understand the learned fC functions in the C-GNS-GD, Figure 5 visualizes the node-wise constraint values as a function of Y (proposed average velocity) for different nodes in the ROPE dataset while holding the other nodes’ proposed update Y fixed.
We also overlay the sequence of five points that represent the proposed Y^(i) steps from the solver where all nodes were jointly optimized. The figure shows the learned fC has a minimum near the ground truth Y, which the gradient descent steps are able to reach.
Incorporating novel constraints at test time We next explored a unique advantage of the constraint-based model: because the fC measures the degree the physical constraints are violated, we can incorporate additional, hand-designed constraints at test time, and use the model to potentially satisfy them. For the ROPE dataset, we designed three constraint functions that return positive values which increase quadratically as the rope enters different “forbidden” regions of the space: a vertical wall, a horizontal floor, and a disk-shaped region. We weighted these constraint terms by a coefficient hyperparameter and added each of the hand-designed constraints to the learned fC term of C-GNS-GD and ran the forward evaluation of the model.
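As an illustration, a quadratic floor penalty added to the learned constraint might look like the following sketch; the floor height, the penalty weight, and the axis convention are all made-up hyperparameters:

```python
import jax.numpy as jnp

def floor_penalty(positions, floor_y=0.0):
    # Grows quadratically as nodes sink into the forbidden region below the floor.
    violation = jnp.maximum(floor_y - positions[..., 1], 0.0)
    return jnp.mean(violation ** 2)

def combined_constraint(f_c_learned, context, y, positions_t, weight=10.0):
    next_positions = positions_t + y  # candidate positions p_hat_{t+1}
    return f_c_learned(context, y) + weight * floor_penalty(next_positions)
```

The same gradient-descent solver then minimizes the joint constraint at test time, with no retraining.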
As shown in Figure 6, the model was able to simulate the dynamics in a way that the corresponding forbidden region was avoided. In some cases, satisfying the joint constraint resulted in unintuitive behaviors, such as the rope links changing in length to adapt to the obstacle (Videos). However, this is to be expected, as the minimum of the joint constraint may not overlap with the minimum of the learned constraint, which is the one that would otherwise guarantee length preservation. For this example we added a further hand-designed constraint that incentivizes maintaining relative distances between nodes. In general this is a powerful example of how constraint-based models can generalize outside their training data, and solve both for the learned dynamics and arbitrary desired constraints.
Generalizing to larger systems via increased solver iterations In principle, iterative and constraint-based simulators should find more accurate solutions by increasing the number of solver iterations, N. We investigated whether the C-GNS-GD and Iterative GNN trained on ROPE could generalize from the Ntrain = 5 iterations on which they were trained to Ntest ∈ [0, 15]. We also analyzed whether increased solver iterations could improve generalization performance when training on ropes with 5−10 nodes and testing on ropes with 20 nodes.
Figure 7a (top row) shows that for test ropes that match the 5−10 nodes experienced during training, the Iterative GNN (light blue) overfits very heavily to Ntest = Ntrain = 5: error increases abruptly for N ≤ 4 and N ≥ 6. By contrast, the C-GNS-GD (light red) generalizes much better to different Ntest. Figure 7a (bottom row) shows that for test ropes with 20 nodes, the Iterative GNN again overfits, while the C-GNS-GD can generalize well to longer ropes if Ntest is increased.
We also trained the Iterative GNN and C-GNS-GD with additional loss terms that were applied to the Y^(i) on each solver iteration, not only the final one, Ŷ = Y^(N). We used an exponential decay factor, α = 0.25, which downweighted this additional loss term more heavily for earlier solver proposals. The dark blue and red curves in Figure 7a show how this additional loss further improves generalization to more solver iterations and larger systems at test time for the Iterative GNN, but especially the C-GNS-GD. Figure 7b visualizes how increasing the solver iterations systematically improves the quality of the long-term rollout accuracy in the ROPE dataset.
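A sketch of that per-iteration loss with the exponential decay α = 0.25 is given below; `proposals` is assumed to be the list [Y^(1), ..., Y^(N)] collected from the solver:

```python
import jax.numpy as jnp

def iteration_weighted_loss(proposals, y_target, alpha=0.25):
    n = len(proposals)
    total = 0.0
    for i, y_i in enumerate(proposals, start=1):
        weight = alpha ** (n - i)  # final proposal gets weight 1, earlier ones decay
        total += weight * jnp.mean((y_i - y_target) ** 2)
    return total
```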
Together these results show the C-GNS-GD is effective in making use of additional resources at test time. This opens the exciting possibility of training on small, simple systems, and testing on large, complex systems. See Supplementary Figure B.2 for further details.
6 DISCUSSION
We presented a general-purpose framework for constraint-based learned simulation, where a learned constraint function implicitly represents the dynamics, and future predictions are generated via a constraint solver. We implemented our framework using GNNs as the constraint function and gradient descent as the constraint solver, and tested it in a variety of challenging physical simulation problems. Our results showed that our C-GNS has competitive or better performance compared to previous learned simulators. We demonstrated unique abilities to generalize to novel, hand-designed constraints, and use more solver iterations than experienced during training to improve the accuracy on larger systems.
We can hypothesize about the relationship between explicit, forward learned simulators and implicit, constraint-based ones in terms of the sharing schemes of these architectures. The C-GNS has a stronger inductive bias than the Forward GNN. The transformation of fC in C-GNS effectively ties the parameters in the resulting ∇Y fC function, and the solver iterations are analogous to how a recurrent neural network’s parameters are shared over iterations. In contrast, the message-passing steps in the Forward GNN used in our work are unshared. In principle, the fD of the Forward GNN is more expressive because if given enough depth, after training it could learn to take parameter values that are equivalent to the shared parameters of C-GNS. Our results shown in Figure 4 support this possibility: the Forward GNN with many more message-passing steps eventually approaches the C-GNS’s performance. Moreover, we speculate the C-GNS’s inductive biases contribute to its advantages in terms of incorporating novel hand-designed constraints and generalizing to more solver iterations and larger systems.
More broadly, the performance, generality and unique advantages of constraint-based learned simulation make it an important new direction in the advancement of machine learning methods for complex simulation problems in science and engineering.
7 REPRODUCIBILITY STATEMENT
We are committed to open-sourcing the model code after the paper is accepted. We will also open-source the MuJoCo datasets that we generated for this paper. We provide more details on the model implementation, as well as the hyperparameters used for each model, in the Supplementary Material. | 1. What is the main contribution of the paper regarding simulation via a constraint-based approach?
2. What are the strengths and weaknesses of the proposed strategy, particularly in comparison with prior works?
3. How does the method handle constraint satisfaction, especially in cases where the movement violates constraints?
4. What are some suggestions for improvement, such as testing various baselines or considering additional symmetries like rotation invariance?
5. Are there any confusions or contradictions in the current version of the paper that need clarification? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes simulating physics via a constraint-based approach instead of direct prediction. In particular, the authors first employ a GNN, taking as input the historical positions and dynamics, to capture the interactions between different particles within the system; its output is treated as the constraint satisfaction scalar. Then, the gradient with respect to the constraint function is applied as the update to the dynamics over a certain number of solver iterations. The experiments are conducted on a variety of challenging physical domains, including simulated ropes, bouncing balls, colliding irregular shapes and splashing fluids.
Review
Strengths:
This paper is overall well written. The authors have clearly demonstrated the pipeline of the proposed strategy.
Although the novelty of combining a GNN with constraint projection is weak (see the weaknesses below), it is valuable to check whether this method can outperform the typical forward approaches (such as the work by Sanchez-Gonzalez et al., 2020). The experimental evaluations generally serve this purpose.
Weaknesses:
The biggest concern is that the novelty is weak. At its core, this paper applies the pipeline of Sanchez-Gonzalez et al. (2020), including the strategy of first computing the predictor and then the updater, and the use of a GNN for interaction modeling. The main difference is that, for the predictor, it replaces traditional forward prediction with an iterative gradient-based solver over the constraint approximator, which is interesting. Yet, this idea has already been proposed by Yang et al. (2020) in the form of iterative projections along the gradient direction of the constraint network. Although the authors have further augmented the input of the constraint function with Y to take the dynamics into account, this modification seems minor and straightforward.
Regarding constraint satisfaction: the authors first predict the changes in position (Y) and then update the next state X_{t+1} via an Euler integrator. Even if the prediction of Y is derived via the constraint solver, the constraint will be broken after the subsequent update from X_{t} to X_{t+1}, which makes it problematic to maintain hard constraints (such as the case in Figure 6 (c), where movement within walls and floors is forbidden). How does the proposed method tackle this issue? In the work by Yang et al. (2020), the authors use the opposite order, first updating X_{t} and then projecting the positions onto the constraint manifold, which is able to satisfy any kind of constraint.
Given the comments above, there are several important baselines that are not tested in the experiments: 1) using the method by Yang et al. (2020) but with a GNN projector; 2) first updating X_{t} and then predicting Y, with other settings in the current framework unchanged; 3) augmenting the explicit simulators (both the iterative and non-iterative versions) with a regularization loss to enforce certain hand-crafted constraints, such as the cases in Figure 6 (c).
Other comments:
This paper is almost well organized, but there are still some confusions in the current version.
1.1 In the introduction, the authors mention both families of simulators (explicit forward vs. implicit constraint-based). Is this statement discussed in previous papers? Is there any citation?
1.2 In related work, the authors state that Neural Projections by Yang et al. (2020) is the first work that uses learned constraints. However, they also introduce that “Recent methods have been proposed for learning constraint functions and solving them in a model’s forward pass”, such as “Deep Implicit Layers” and “Deep Equilibrium Models”. These two statements seem self-contradictory.
1.3 In Section 4.3, the authors claim that in Figure 3 the C-GNS-GD’s performance was generally better than the other model variants, which is not true. In terms of the rollout MSE, the Iterative GNN outperforms C-GNS-GD in three out of four cases. The authors are suggested to provide more explanations here.
It is good that the proposed method is translation-invariant. Yet, besides this symmetry, there are other cases, such as rotation invariance/equivariance. This is important for improving the generalization ability of the simulator, given that if we rotate the input states by a certain angle, the output should change in the same way. Have the authors taken this symmetry into account?
ICLR | Title
Constraint-based graph network simulator
Abstract
In the rapidly advancing area of learned physical simulators, nearly all methods train a forward model that directly predicts future states from input states. However, many traditional simulation engines use a constraint-based approach instead of direct prediction. Here we present a framework for constraint-based learned simulation, where a scalar constraint function is implemented as a trainable function approximator, and future predictions are computed as the solutions to a constraint satisfaction problem. We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver. The architecture can be trained by standard backpropagation. We test the model on a variety of challenging physical domains, including simulated ropes, bouncing balls, colliding irregular shapes and splashing fluids. Our model achieves better or comparable performance to top learned simulators. A key advantage of our model is the ability to generalize to more solver iterations at test time to improve the simulation accuracy. We also show how hand-designed constraints can be added at test time to satisfy objectives which were not present in the training data, which is not possible with forward approaches. Our constraint-based framework is applicable to any setting in which forward learned simulators are used, and more generally demonstrates key ways that learned models can leverage popular techniques from numerical methods.
1 INTRODUCTION
Consider a bowling ball colliding with a bowling pin. You might explain this event as involving a pair of forces being generated, one which causes the pin to move, and the other which causes the ball to careen away with a different direction and speed. This kind of intuitive cause-and-effect approach is analogous to physical simulators that apply an explicit forward model to calculate a future state directly from the current one, such as when numerically integrating discretized equations of motion.
An alternative, but equally valid, way to explain the collision is in terms of constraint satisfaction: the ball and pin cannot occupy the same location at the same time, and their combined energies and momenta must be conserved, so the post-collision trajectories are the only way the future can unfold without violating these constraints. This constraint-based approach is analogous to physical simulators that use an implicit function to model a system of constraints over the current and future states, and which generate a prediction by searching for a future state that respects all constraints.
Both families of simulators—those based on explicit, forward functions versus those which define the dynamics implicitly, via constraints—are widely used in physics, engineering, and graphics. In principle they can model the same types of dynamics; however, they differ in how their respective predictions are computed, and in practice they strike different trade-offs that determine why one or the other is preferred in different domains. For example, explicit methods are popular for large systems with (mostly) independent local effects whose space and time derivatives are relatively smooth, and their accuracy can often be increased by discretizing space and time more finely. Implicit approaches are often preferred for systems with strong interactions, such as rigid and stiff dynamics, and more accurate solutions can often be found by using more sophisticated constraint solvers or by increasing the computational budget (e.g., solver iterations) allocated to searching for solutions. In machine learning (ML), there have recently been rapid advances in methods for learning to simulate complex dynamic processes; however, almost all (e.g., Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021)) have focused on explicit forward model approaches, with few exceptions (Yang et al., 2020).
Here we present a framework for learning to simulate complex dynamics via constraint satisfaction. Our “Constraint-based Graph Network Simulator” (C-GNS) defines a single scalar-valued constraint function that represents whether a future state satisfies the physical constraints, conditioned on the current and previous states. The constraint function is implemented as a Graph Neural Network (GNN) (Bronstein et al., 2017; Battaglia et al., 2018), which can model systems with rich compositional structure—multiple bodies, complex meshes, etc. To predict the next state via the constraint function’s implicit representation of the dynamics, a gradient-based solver finds a proposed state which satisfies the constraints. We train it through the solver by backpropagation. We also introduce a hybrid approach that proposes and refines the future state using an explicit iterative predictor, rather than solving for learned constraints.
We tested the C-GNS on a variety of challenging physical simulation domains generated by several different simulation engines: simulated rope, bouncing balls, and bouncing irregular rigid shapes (MuJoCo, Todorov et al. (2012)) and splashing fluids (Flex, Macklin et al. (2014)). We found that the C-GNS’s simulated rollouts were more accurate than a state-of-the-art Graph Net Simulator (GNS) (Sanchez-Gonzalez et al., 2020) with a comparable number of parameters. At test time, the C-GNS could use additional solver iterations to improve its predictive accuracy, striking desired speed-accuracy trade-offs. It could also satisfy new, hand-designed constraints jointly alongside its learned constraints. Neither of these capabilities is possible in explicit forward-style approaches.
2 BACKGROUND AND RELATED WORK
Constraint solvers are central to many physics simulators. Most rigid-body and game engines use constraints to model joints, collision and contact (Baraff, 1994). They are used for limiting strain in realistic cloth simulation (Thomaszewski et al., 2009), and are a core component in Eulerian incompressible fluid solvers to solve for pressure (Chorin, 1967). Recently, position-based (Müller et al., 2007) and projective dynamics methods (Bouaziz et al., 2014) have become very popular for interactive simulation. These methods express dynamics purely as constraints, and can simulate a wide range of physical systems, from rigid bodies through soft bodies to fluids (Macklin et al., 2014).
Machine learning methods for accelerating scientific simulation of complex systems, such as turbulence (Kochkov et al., 2021; Wang et al., 2020) and aerodynamics (Thuerey et al., 2020; Zhang et al., 2018), have grown rapidly in recent years. GNN-based learned simulators, in particular, are a very flexible approach which can model a wide range of systems, from articulated dynamics (Sanchez-Gonzalez et al., 2018) to particle-based physics (Mrowca et al., 2018; Li et al., 2019; Sanchez-Gonzalez et al., 2020) and mesh-based continuum systems (Pfaff et al., 2021; De Avila Belbute-Peres et al., 2020), and generalize well to unseen scenarios. Combining learning algorithms with principles from physics and numerical methods, such as auxiliary loss terms and rich inductive biases, can improve sample complexity, computational efficiency, and generalization (Wu et al., 2018; Karniadakis et al., 2021; Chen et al., 2018; Rubanova et al., 2019). Imposing Hamiltonian (Greydanus et al., 2019; Sanchez-Gonzalez et al., 2019; Chen et al., 2019) and Lagrangian (Lutter et al., 2019; Cranmer et al., 2020; Finzi et al., 2020) mechanics in learned simulators offers unique speed/accuracy tradeoffs and can preserve symmetries more effectively.
Recent methods have been proposed for learning constraint functions and solving them in a model’s forward pass (Duvenaud et al. (2020)’s “Deep Implicit Layers” tutorial is an excellent hands-on survey). Such models can play games (Amos & Kolter, 2017; Wang et al., 2019), optimize power flow (Donti et al., 2021), support robotic planning (Loula et al., 2020), and perform combinatorial optimization (Bartunov et al., 2020). Solvers such as gradient descent and Newton’s method are differentiable, and support training by backpropagation, but this can be computationally expensive, so approaches such as Deep Equilibrium Models (DEM) (Bai et al., 2019; 2020) use implicit differentiation to compute gradients only at the solution point.
Despite the popularity of constraint-based traditional simulators, only a single simulator which uses learned constraints has been reported (Yang et al., 2020). Their “Neural Projections” method, based on Goldenthal et al. (2007), iteratively proposes a future state with an Euler step, then projects the proposal onto a learned constraint manifold, implemented as a multilayer perceptron (MLP). Crucially, their constraint function only measures how much an individual state violates the learned constraints, and thus is not an implicit representation of the dynamics. It is suitable for quasi-static regimes, but not for scenarios such as the elastic collisions in the bowling ball example described above.
3 MODEL FRAMEWORK
Simulation basics A physical trajectory, measured at discrete time intervals, is a sequence of states, (X1, ..., XT), where Xt represents properties such as the positions, velocities, masses, etc., of elements of the system. A physical simulator, s, is a function that maps current and/or previous state(s), which we term the context, X≤t, to a predicted future state, X̂t+1 = s(X≤t) (see Figure 1a)¹. A simulated physical trajectory, termed a rollout, (Xt, X̂t+1, X̂t+2, ...), can be generated by repeatedly applying s to its own predicted state, X̂t+1 = s(X̂≤t).
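To make the recursion concrete, here is a minimal sketch of such a rollout loop; the learned simulator `s` and the context window length are illustrative assumptions (the window of 4 matches the inputs used later in Section 4.2).

```python
# Minimal rollout sketch: repeatedly apply a learned simulator `s` (assumed
# callable on a list of recent states) to its own predictions.
def rollout(s, initial_states, num_steps, context_len=4):
    states = list(initial_states)          # (X_1, ..., X_t)
    for _ in range(num_steps):
        context = states[-context_len:]    # X_<=t
        states.append(s(context))          # X̂_{t+1} = s(X_<=t)
    return states                          # (..., X_t, X̂_{t+1}, X̂_{t+2}, ...)
```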
Simulators are often composed of a PREDICTOR mechanism, which maps the context X≤t to an update value Ŷ that represents information about the system’s temporal evolution at the current time. Then Ŷ is used by an UPDATER mechanism to update the current state to the next state: X̂t+1 = UPDATER(X≤t, Ŷ), e.g., updating current positions and velocities represented by Xt with new velocities and accelerations represented by Ŷ, to predict the next state.
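As a sketch, this decomposition can be written as follows; both mechanisms are hypothetical callables standing in for concrete implementations.

```python
# PREDICTOR/UPDATER decomposition of a single simulator step.
def simulator_step(predictor, updater, context):
    y_hat = predictor(context)        # update value Ŷ (e.g., velocities)
    return updater(context, y_hat)    # X̂_{t+1} = UPDATER(X_<=t, Ŷ)
```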
Explicit simulators Across science, engineering, and graphics, a popular class of simulators are defined explicitly: the state update Ŷ is predicted directly from X≤t using an explicit forward function, Ŷ = fD(X≤t), as illustrated in Figure 1b. Among the rapidly growing family of learned simulators, the forward function fD is typically implemented using a neural network (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021).
Constraint-based implicit simulators Here we explore learned simulators based on implicit formulations of the dynamics. Rather than predicting the desired state directly, as in explicit formulations, our implicit simulator uses a differentiable constraint function, c = fC(X≤t, Ŷ), where c is a scalar that quantifies how well a proposed state update Ŷ agrees with X≤t. A future prediction is generated by applying a solver, such as an optimization or zero-finding algorithm, to find a Ŷ that satisfies the constraint function, and applying the UPDATER to update Xt to X̂t+1. The fC can represent all the physical constraints in the system, including the time dynamics.
¹Although physics is Markovian, we use X≤t as input because our framework can also apply to dynamic processes that are non-Markovian. Providing previous states can also be helpful when hidden properties of the system are only identifiable over a sequence of observed states, and when a state does not represent velocity or momentum information.
As illustrated in Figure 1d, we formulate our constraint-solving procedure via an iterative method that starts with an initial proposal, Y^(0). On the i-th iteration, the solver uses the gradient of fC w.r.t. Y at the current proposal to compute a change to the proposal, δY = −λ ∇Y fC(X≤t, Y)|_{Y=Y^(i)}. This δY is then used to revise the proposal to Y^(i+1) = Y^(i) + δY. This process repeats for N steps, and the final proposal value is treated as the PREDICTOR’s output, Ŷ = Y^(N).
Our constraint-based model’s fC is defined as a trainable function approximator that is real-valued and lower bounded at zero, and gradient descent with a fixed step size λ is used to find the Ŷ that minimizes it. This induces the semantics that the desired Ŷ = argmin_Y fC(X≤t, Y).
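A minimal sketch of this solver loop in JAX (the paper computes gradients with JAX auto-differentiation); the scalar constraint `f_c` is an assumed placeholder, and the defaults mirror the fixed step size λ = 0.001 and N = 5 iterations reported below.

```python
import jax

# Gradient-descent constraint solver: f_c(context, y) is an assumed
# differentiable function returning a scalar constraint value.
def solve_constraint_gd(f_c, context, y0, lam=1e-3, n_iters=5):
    grad_fn = jax.grad(f_c, argnums=1)        # ∇_Y f_c(X_<=t, Y)
    y = y0                                    # initial proposal Y^(0)
    for _ in range(n_iters):
        y = y - lam * grad_fn(context, y)     # Y^(i+1) = Y^(i) - λ ∇_Y f_c
    return y                                  # Ŷ = Y^(N)
```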
We also explore a second constraint-solving procedure, inspired by Yang et al. (2020)’s Neural Projections’ use of “fast projection” (Goldenthal et al., 2007). Specifically, λ = fC(X≤t, Y^(i)) / ‖∇Y fC(X≤t, Y)|_{Y=Y^(i)}‖². Unlike gradient descent, fast projection is a zero-finding algorithm, so in this case fC is not lower bounded. This induces the semantics that fC(X≤t, Ŷ) = 0.
This general formulation of constraint-based learned simulation can be trained by backpropagating loss gradients through the solver loop². The computational budget of the forward pass can be varied via the number of solver iterations N.
Explicit iterative simulators As a hybrid between forward and constraint-based simulators, we introduced a model which iteratively refines a proposed state update, like in the constraint-based approach described above, but using an explicit function to directly output a δY at each iteration, rather than solving a constraint function (see Figure 1c). See Section 4.3 for details.
4 EXPERIMENTS
4.1 EXPERIMENTAL TASK DOMAINS
We test our framework on a variety of physical environments, shown in Figure 2: ROPE, BOUNCING BALLS and BOUNCING RIGIDS, whose ground truth training and test data were generated by the MuJoCo physics simulator, as well as BOXBATH from Li et al. (2019). These environments demonstrate a diverse set of physical constraints: ‘hard’ constraints (preserving the shape of the rigid object and resolving collisions), and ‘soft’ constraints on fluid movement, handling gravity and preserving the momentum of the rope and bouncing balls. See the Supplementary Materials for details.
4.2 MODEL IMPLEMENTATIONS
Representing the physical system Our experimental domains are physical systems comprised of sets of interacting point-like elements, e.g., objects, particles, mesh vertices, etc. We represent the state as Xt = (p^j_t)_{j=1...|Xt|}, where |Xt| is the number of elements and p^j_t is the j-th element’s position at time t. There are also other static properties of the physical elements, e.g., masses, material types, etc., which we represent with Z to keep it distinct from the dynamic state information represented by Xt. The input context is X≤t = (Z, Xt−3, Xt−2, Xt−1, Xt).
²Implicit differentiation at the solution point should be applicable as well, and could potentially offer computational benefits as mentioned in Section 2, though we do not explore that here.
In our implementation, Ŷ represents the predicted changes in position (i.e., the “average velocity” across the time step)³: ŷ^j = Δp̂^j_{t+1} = p̂^j_{t+1} − p^j_t. The UPDATER then computes X̂t+1 using p̂^j_{t+1} = p^j_t + Δp̂^j_{t+1}, where p^j_t is provided in the input X≤t.
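A one-line sketch of this UPDATER; the array shapes are illustrative assumptions.

```python
# UPDATER sketch: Ŷ holds per-node position changes (average velocities),
# so next positions follow from a single integration step.
def update_positions(p_t, y_hat):        # both of shape [num_nodes, dim]
    return p_t + y_hat                   # p̂_{t+1} = p_t + Δp̂_{t+1}, per node
```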
Constructing the input graph Our implementations of the fD, fDI, and fC use GNNs as the function approximators, so we need to pack the context, X≤t, and (for the fDI and fC) the proposed state update information, Y^(i), into an input graph, Gt = (Vt, Et). The edges Et represent possible interactions among the elements, such as fully connected edges to represent collisions and rigid attachments in BOUNCING BALLS and BOUNCING RIGIDS, spring constraints in ROPE, and interactions among particles within a fixed connectivity radius in BOXBATH.
We enforced translation-invariance by construction, by never providing absolute positions as input to the models. Instead, the j-th input node’s features are the static properties and a sequence of the three most recent position changes (i.e., average velocities), v^j_t = [z^j, Δp^j_{t−2}, Δp^j_{t−1}, Δp^j_t], where Δp^j_t = p^j_t − p^j_{t−1}. For fDI and fC, which also take the solver’s current proposed Y^(i), we also concatenate the proposed average velocity from the i-th solver iteration, y^{j,(i)}, as input. For the input edge feature for an edge that connects from node j to k, we also provide the relative displacement vector between the nodes’ positions, e^{jk}_t = p^k_t − p^j_t.
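A sketch of this feature construction; the shapes and the `senders`/`receivers` edge index arrays are assumptions for illustration.

```python
import jax.numpy as jnp

# Translation-invariant inputs: positions p of shape [T, num_nodes, dim]
# (most recent last, T >= 4), static properties z of shape [num_nodes, z_dim],
# proposal y of shape [num_nodes, dim], edges given by index arrays.
def build_features(p, z, y, senders, receivers):
    dp = p[1:] - p[:-1]                            # Δp_t = p_t - p_{t-1}
    node_feats = jnp.concatenate(
        [z, dp[-3], dp[-2], dp[-1], y], axis=-1)   # [z^j, Δp^j_{t-2..t}, y^j]
    edge_feats = p[-1][receivers] - p[-1][senders] # e^{jk}_t = p^k_t - p^j_t
    return node_feats, edge_feats
```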
GNN-based Encode-Process-Decode core We implemented fD, fDI, and fC using Graph Networks (GN) (Battaglia et al., 2018), arranged in the Encode-Process-Decode architecture, similar to previous work on GN-based learned simulators (Sanchez-Gonzalez et al., 2018; 2020; Pfaff et al., 2021). The Encoder uses two MLPs to encode node and edge features into high-dimensional latent vectors. The Processor applies multiple GNs, with unshared weights, in sequence, with node and edge residual connections at each step. We do not use global updates for the GNs. The Decoder uses an MLP to produce an output for each node.
The fD directly returns Ŷ. The fDI returns a change to the proposed update δY for the current iteration. The fC’s Decoder returns a scalar for each node to produce a constraint value per node, {c^j | j = 1...|V|}. These node-wise constraint values are averaged to compute a single scalar constraint for the entire system, c = fC(X≤t, Ŷ) = (1/|V|) Σ_{j=1}^{|V|} c^j.
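A sketch of this node-wise reduction; the squaring comes from the C-GNS-GD variant in Section 4.3, which keeps fC non-negative.

```python
import jax.numpy as jnp

# Reduce per-node decoder outputs to the single system-level constraint.
def system_constraint(node_outputs):      # shape [num_nodes]
    return jnp.mean(node_outputs ** 2)    # c = (1/|V|) Σ_j c^j, with c^j >= 0
```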
Solving the constraint For fDI and fC we initialize the proposal to the most recent average velocity, y^{j,(0)} = Δp^j_t.⁴ We used auto-differentiation in JAX to compute the gradient function, ∇Y fC, and the step size λ was specific to the model variant, as described below. During training we used N = 5 solver iterations.
4.3 MODEL VARIANTS
The key questions in this work are whether constraint-based learned simulators can compete with explicit, forward learned simulators, whether implementing the constraint function with GNNs is more effective than with MLPs, and how minima-based constraint functions solved by gradient descent compare to constraints defined as the zeros of a function which are solved by fast projection (Goldenthal et al., 2007). The following model variants allow us to answer these questions.
Forward GNN This is an explicit, forward GNN-based learned simulator based on the GNS models from Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021). It directly predicts the state update Ŷ from the past time points X≤t.
C-GNS Gradient Descent (C-GNS-GD) and C-GNS-Fast Projections (C-GNS-FP) These are our proposed constraint-based GNN models. For the C-GNS-GD, the scalar per-node output c^j was squared, to force the overall fC to be non-negative, and a gradient descent solver with a fixed step size, λ = 0.001, was used to minimize it. For C-GNS-FP, the λ was based on “fast projection” (Goldenthal et al., 2007; Yang et al., 2020), as described in Section 3. Supplementary Figure B.5(c-d) shows ablations.
³For BOXBATH we vary a number of modelling choices to best match those in Sanchez-Gonzalez et al. (2020). The major difference is that we set Ŷ to be the average acceleration rather than the average velocity. See the Supplementary Materials for other differences.
⁴To ensure analogous information is provided downstream of fD, the update rule also includes the previous average velocity: p̂^j_{t+1} = p^j_t + Δp^j_t + ŷ^j.
Iterative GNN We implemented a hybrid between the Forward GNN and C-GNS, as shown in Figure 1c. It was identical to the C-GNS models, except that its fDI directly predicted proposed state updates as in fD, rather than computing them via the gradients of fC.
ConstraintMLP Gradient Descent (ConstraintMLP-GD) and ConstraintMLP-Fast Projections (ConstraintMLP-FP) These were MLP-based constraint models, which, rather than using GNNs to implement fC, instead concatenated the embeddings of all the input nodes into a single vector and passed them to an MLP implementation of fC. By default, these models cannot handle variable-length inputs, so we padded smaller states with zeros up to the maximum state size. The ConstraintMLP-FP was the MLP analog to our C-GNS-FP, and was similar to Neural Projections (Yang et al., 2020). The ConstraintMLP-GD used gradient descent, and was the MLP analog to our C-GNS-GD. We omit the results for the ConstraintMLP models on BOXBATH (1024 nodes), as MLPs do not generally work well on physical systems with more than a few particles (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018).
4.4 TRAINING AND EVALUATION
We trained the models to make next-step predictions, by computing the L2 loss between the predicted X̂t+1 and the corresponding ground truth Xt+1, averaged over nodes. All model weights and biases were trained using standard backpropagation with the Adam optimizer.
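A sketch of one training step through the unrolled solver; `apply_constraint` (the GNN constraint as a function of its parameters), `solve_constraint_gd` from Section 3, the optax library, and the learning rate are all assumptions, since the text only specifies an L2 loss and the Adam optimizer.

```python
import jax
import jax.numpy as jnp
import optax  # assumed optimizer library; the paper only specifies "Adam"

optimizer = optax.adam(1e-4)  # learning rate is an illustrative assumption

def train_step(params, opt_state, context, y0, target_y):
    def loss_fn(p):
        f_c = lambda ctx, y: apply_constraint(p, ctx, y)  # hypothetical GNN
        y_hat = solve_constraint_gd(f_c, context, y0)     # unrolled N steps
        return jnp.mean((y_hat - target_y) ** 2)          # node-averaged L2
    loss, grads = jax.value_and_grad(loss_fn)(params)     # backprop through solver
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss

# usage: opt_state = optimizer.init(params), then call train_step per batch
```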
At test time, we compute 1-step metrics by evaluating the 1-step errors along each point of the ground truth trajectory. We also evaluate rollout errors by iteratively applying the learned model starting from an initial state, over 160 rollout steps, and computing the error between the predicted and ground truth trajectories.
5 RESULTS
Predictive accuracy⁵ Our experimental results show that our C-GNS-GD’s performance was generally better than the other model variants. Figure 3 compares the different models on 1-step and rollout position MSE (see Supplementary Table B.1 for numerical results). For each dataset, we used the same number of message-passing steps (MP) for all GN-based models. We used 2 MPs for the ROPE dataset, and 1 MP for all other tasks.
⁵Videos of the model rollouts are available at sites.google.com/view/constraint-based-simulator
The C-GNS-GD has lower 1-step MSE between the ground truth and predicted positions than the other models across all datasets. Qualitatively, we observed that for the Forward GNN with a single message-passing step, the box in BOXBATH “melts” over time, as the forward model cannot preserve its rigid shape (see Videos). The comparable C-GNS-GD, by contrast, maintains the rigidity more effectively. These quantitative results suggest that constraint-based learned simulators are a competitive alternative to explicit, forward learned simulators. We generally found that the Iterative GNN was fairly competitive with the C-GNS-GD in overall performance and better than the Forward GNN.
We also found that the C-GNS-FP was generally less stable across seeds, and not as accurate as the C-GNS-GD. The same conclusion holds for ConstraintMLP-FP versus ConstraintMLP-GD. We speculate that the fast projection algorithm may make training challenging because the step size λ is proportional to fC, which may cause poor zero-finding early in training when fC is not yet informative. Additionally, we find that the C-GNS-FP algorithm becomes unstable in areas with shallow constraint gradients, perhaps because its λ depends on the inverse of the gradient’s norm.
We explored how varying the message-passing steps and solver iterations (N) influenced the relative performance among the models in our ROPE dataset. Figure 4 shows that the C-GNS-GD generally required fewer parameters and message-passing steps to achieve comparable 1-step MSE to the other models. Supplementary Figure B.3 shows similar results for the rollout MSE. For most combinations of message-passing steps and number of solver iterations, C-GNS-GD (green) outperforms the Iterative GNN (yellow), C-GNS-FP (purple), as well as the Forward GNN (blue) with the same number of MPs (the Forward GNN is not an iterative model, so we plot it as a single bar). We hypothesize that the solver iterations in the C-GNS and Iterative GNN may play a similar role to message passing with shared weights.
Interpreting the learned constraints To better understand the learned fC functions in the C-GNS-GD, Figure 5 visualizes the node-wise constraint values as a function of Y (proposed average velocity) for different nodes in the ROPE dataset while holding the other nodes’ proposed update Y fixed.
We also overlay the sequence of five points that represent the proposed Y^(i) steps from the solver where all nodes were jointly optimized. The figure shows the learned fC has a minimum near the ground truth Y, which the gradient descent steps are able to reach.
Incorporating novel constraints at test time We next explored a unique advantage of the constraint-based model: because the fC measures the degree the physical constraints are violated, we can incorporate additional, hand-designed constraints at test time, and use the model to potentially satisfy them. For the ROPE dataset, we designed three constraint functions that return positive values which increase quadratically as the rope enters different “forbidden” regions of the space: a vertical wall, a horizontal floor, and a disk-shaped region. We weighted these constraint terms by a coefficient hyperparameter and added each of the hand-designed constraints to the learned fC term of C-GNS-GD and ran the forward evaluation of the model.
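A sketch of how such a test-time penalty can be composed with the learned constraint; the floor geometry, axis convention, and weight are illustrative assumptions.

```python
import jax.numpy as jnp

# Hand-designed "forbidden region" penalty: positive and growing
# quadratically as proposed positions sink below an assumed floor.
def floor_penalty(y, p_t, floor_y=0.0):
    p_next = p_t + y                                  # proposed next positions
    depth = jnp.maximum(0.0, floor_y - p_next[:, 1])  # penetration below floor
    return jnp.mean(depth ** 2)

def combined_constraint(f_c, context, y, p_t, weight=10.0):
    # the solver now minimizes learned dynamics + the extra test-time term
    return f_c(context, y) + weight * floor_penalty(y, p_t)
```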
As shown in Figure 6, the model was able to simulate the dynamics in a way that the corresponding forbidden region was avoided. In some cases, satisfying the joint constraint resulted in unintuitive behaviors, such as the rope links changing in length to adapt to the obstacle (Videos). However, this is to be expected, as the minimum of the joint constraint may not overlap with the minimum of the learned constraint, which is the one that would otherwise guarantee length preservation. For this example we added a further hand-designed constraint that incentivizes maintaining relative distances between nodes. In general this is a powerful example of how constraint-based models can generalize outside their training data, and solve both for the learned dynamics and arbitrary desired constraints.
Generalizing to larger systems via increased solver iterations In principle, iterative and constraint-based simulators should find more accurate solutions by increasing the number of solver iterations, N. We investigated whether the C-GNS-GD and Iterative GNN trained on ROPE could generalize from the Ntrain = 5 iterations on which they were trained to Ntest ∈ [0, 15]. We also analyzed whether increased solver iterations could improve generalization performance from training on ropes with 5−10 nodes to test ropes with 20 nodes.
Figure 7a (top row) shows that for test ropes that match the 5−10 nodes experienced during training, the Iterative GNN (light blue) overfits very heavily to Ntest = Ntrain = 5: error increases abruptly for N ≤ 4 and N ≥ 6. By contrast, the C-GNS-GD (light red) generalizes much better to different Ntest. Figure 7a (bottom row) shows that for test ropes with 20 nodes, the Iterative GNN again overfits, while the C-GNS-GD can generalize well to longer ropes if Ntest is increased.
We also trained the Iterative GNN and C-GNS-GD with an additional loss term that was applied to the Y^(i) on each solver iteration, not only the final one, Ŷ = Y^(N). We used an exponential decay factor, α = 0.25, which downweighted this additional loss term more heavily for earlier solver proposals. The dark blue and red curves in Figure 7a show how this additional loss further improves generalization to more solver iterations and larger systems at test time for the Iterative GNN, but especially the C-GNS-GD. Figure 7b visualizes how increasing the solver iterations systematically improves the quality of the long-term rollout accuracy in the ROPE dataset.
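A sketch of this auxiliary loss; the exact weighting is an assumption, chosen so that the final proposal Y^(N) keeps weight 1 and earlier proposals are downweighted by powers of α.

```python
import jax.numpy as jnp

# Per-iteration auxiliary loss over all intermediate solver proposals.
def multi_iteration_loss(proposals, target_y, alpha=0.25):
    n = len(proposals)                      # proposals = [Y^(1), ..., Y^(N)]
    total = 0.0
    for i, y in enumerate(proposals):
        w = alpha ** (n - 1 - i)            # earlier iterations count less
        total = total + w * jnp.mean((y - target_y) ** 2)
    return total
```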
Together these results show the C-GNS-GD is effective in making use of additional resources at test time. This opens the exciting possibility of training on small, simple systems, and testing on large, complex systems. See Supplementary Figure B.2 for further details.
6 DISCUSSION
We presented a general-purpose framework for constraint-based learned simulation, where a learned constraint function implicitly represents the dynamics, and future predictions are generated via a constraint solver. We implemented our framework using GNNs as the constraint function and gradient descent as the constraint solver, and tested it in a variety of challenging physical simulation problems. Our results showed that our C-GNS has competitive or better performance compared to previous learned simulators. We demonstrated unique abilities to generalize to novel, hand-designed constraints, and use more solver iterations than experienced during training to improve the accuracy on larger systems.
We can hypothesize about the relationship between explicit, forward learned simulators and implicit, constraint-based ones in terms of the sharing schemes of these architectures. The C-GNS has a stronger inductive bias than the Forward GNN. The transformation of fC in C-GNS effectively ties the parameters in the resulting ∇Y fC function, and the solver iterations are analogous to how a recurrent neural network’s parameters are shared over iterations. In contrast, the message-passing steps in the Forward GNN used in our work are unshared. In principle, the fD of the Forward GNN is more expressive because, if given enough depth, after training it could learn to take parameter values that are equivalent to the shared parameters of C-GNS. Our results shown in Figure 4 support this possibility: the Forward GNN with many more message-passing steps eventually approaches the C-GNS’s performance. Moreover, we speculate the C-GNS’s inductive biases contribute to its advantages in terms of incorporating novel hand-designed constraints and generalizing to more solver iterations and larger systems.
More broadly, the performance, generality and unique advantages of constraint-based learned simulation make it an important new direction in the advancement of machine learning methods for complex simulation problems in science and engineering.
7 REPRODUCIBILITY STATEMENT
We are committed to open-sourcing the model code after the paper is accepted. We will also open-source the MuJoCo datasets that we generated for this paper. We provide more details on the model implementation as well as the hyperparameters used for each model in the Supplementary Material. | 1. What is the main contribution of the paper regarding physical simulation and constraint-based inference?
2. What are the strengths and weaknesses of the proposed approach, particularly in its extension to graph neural networks?
3. Do you have any concerns about the method's ability to generalize to static information and time steps?
4. How does the reviewer assess the novelty and impact of the paper compared to prior works, such as [1]?
5. Are there any questions or suggestions regarding the choice of activation functions, layer normalization, and the role of α in improving generalizability? | 1. Summary Of The Paper
Review | Summary Of The Paper
This manuscript proposes to learn numerical solutions for Lagrangian physical simulation with a constraint-based inference method on graph neural networks. This method involves an iterative update during inference and thus enables test-time dynamical correction. The contributions are: (1) this manuscript builds a scalar predictor to indicate how well the constraints are satisfied. (2) this manuscript proposes using graph neural networks as the backbone to deal with a variable length of the physical domain. They also examine the effectiveness with a number of experiments, including the state prediction experiment on four different environments and multiple ablation studies on different hyper-parameters.
Review
This manuscript extends an idea from [1] that the learning of physical simulation can be viewed as a constrained optimization problem. There are a few interesting points:
The authors extend it using graph neural networks. This modification generalizes the method to an indefinite number of states in the physical system.
The authors claim that this method trades off inference time against accuracy dynamically, so that at test time it is possible to increase the number of solver steps for better convergence.
Despite these respectable contributions, I still have a few questions and reservations:
I notice different activation functions are applied to different environments. Are they intentional? Is there a guideline for choosing?
LayerNorm will be affected by the magnitude of the static information. For example, if you simulate under continuum mechanics, Young's modulus can be so large that it dominates the other entries in the vector. I wonder how the user should deal with this.
How does this method generalize with respect to the static information and the time step?
Second-order optimization usually provides super-linear convergence when it is close to the optimum, which appears to be the case in Figure 5. I would find a comparison between the proposed first-order method and Newton's or quasi-Newton methods helpful.
The authors claim that adding an α improves the generalizability to unseen parameters, including articulation length and optimization steps. Is there a reason why it is not a part of the standard model, given that it has an advantage?
The values 3 and 5 are used separately in different environments. Some discussion of this choice would be helpful (I understand 5 is from previous work). Is it because of the time integration method used in MuJoCo?
My major reservation is the technical novelty compared to the preceding work [1]. I find the extension to graph neural networks exciting but, at the same time, incremental. I wonder if the authors can give more explanation of the delta between them, perhaps supported by a more impressive application.
[1] Yang, Shuqi, Xingzhe He, and Bo Zhu. "Learning Physical Constraints with Neural Projections." Advances in Neural Information Processing Systems 33 (2020): 5178-5189. |
ICLR | Title
Constraint-based graph network simulator
Abstract
In the rapidly advancing area of learned physical simulators, nearly all methods train a forward model that directly predicts future states from input states. However, many traditional simulation engines use a constraint-based approach instead of direct prediction. Here we present a framework for constraint-based learned simulation, where a scalar constraint function is implemented as a trainable function approximator, and future predictions are computed as the solutions to a constraint satisfaction problem. We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver. The architecture can be trained by standard backpropagation. We test the model on a variety of challenging physical domains, including simulated ropes, bouncing balls, colliding irregular shapes and splashing fluids. Our model achieves better or comparable performance to top learned simulators. A key advantage of our model is the ability to generalize to more solver iterations at test time to improve the simulation accuracy. We also show how hand-designed constraints can be added at test time to satisfy objectives which were not present in the training data, which is not possible with forward approaches. Our constraint-based framework is applicable to any setting in which forward learned simulators are used, and more generally demonstrates key ways that learned models can leverage popular techniques from numerical methods.
<latexit sha1_base64="nk/O4KvoHG7nWgCKsKo+m2fcKfM=">AAAB/nicbVBNS8NAEN3Ur1q/ouLJy2IRPJVECnos9uKxgm2VJoTNdtMu3WzC7kQsoeBf8eJBEa/+Dm/+G7dtDtr6YODx3gwz88JUcA2O822VVlbX1jfKm5Wt7Z3dPXv/oKOTTFHWpolI1F1INBNcsjZwEOwuVYzEoWDdcNSc+t0HpjRP5C2MU+bHZCB5xCkBIwX2kSdJKEhwj6Mg94A9Qt6cTAK76tScGfAycQtSRQVagf3l9ROaxUwCFUTrnuuk4OdEAaeCTSpepllK6IgMWM9QSWKm/Xx2/gSfGqWPo0SZkoBn6u+JnMRaj+PQdMYEhnrRm4r/eb0Moks/5zLNgEk6XxRlAkOCp1ngPleMghgbQqji5lZMh0QRCiaxignBXXx5mXTOa269Vr+pVxtXRRxldIxO0Bly0QVqoGvUQm1EUY6e0St6s56sF+vd+pi3lqxi5hD9gfX5A3C7lc8=</latexit>
<latexit sha1_base64="vQ3/3W0LApHcMPwu9rwFnhCUcHI=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeiF48V7Ie0oWw2k3bpZhN3N0Ip/RNePCji1b/jzX/jts1BWx8MPN6bYWZekAqujet+O4W19Y3NreJ2aWd3b/+gfHjU0kmmGDZZIhLVCahGwSU2DTcCO6lCGgcC28HoZua3n1Bpnsh7M07Rj+lA8ogzaqzU6YUoDCUP/XLFrbpzkFXi5aQCORr98lcvTFgWozRMUK27npsaf0KV4UzgtNTLNKaUjegAu5ZKGqP2J/N7p+TMKiGJEmVLGjJXf09MaKz1OA5sZ0zNUC97M/E/r5uZ6MqfcJlmBiVbLIoyQUxCZs+TkCtkRowtoUxxeythQ6ooMzaikg3BW355lbQuql6tWrurVerXeRxFOIFTOAcPLqEOt9CAJjAQ8Ayv8OY8Oi/Ou/OxaC04+cwx/IHz+QOaD4+w</latexit>
<latexit sha1_base64="fAxixLzqPP0ibRTKMP9FMpa2phA=">AAACB3icbZDLSsNAFIZPvNZ6i7oUZLAIlUJJpKAboejGZQV7o41lMpm0QycXZiZCCd258VXcuFDEra/gzrdxmnahrT8MfPznHM6c3405k8qyvo2l5ZXVtfXcRn5za3tn19zbb8goEYTWScQj0XKxpJyFtK6Y4rQVC4oDl9OmO7ye1JsPVEgWhXdqFFMnwP2Q+Yxgpa2eedS+T4usZJ+O0SXKWFMJdT3KFUbtnlmwylYmtAj2DAowU61nfnW9iCQBDRXhWMqObcXKSbFQjHA6zncTSWNMhrhPOxpDHFDppNkdY3SiHQ/5kdAvVChzf0+kOJByFLi6M8BqIOdrE/O/WidR/oWTsjBOFA3JdJGfcKQiNAkFeUxQovhIAyaC6b8iMsACE6Wjy+sQ7PmTF6FxVrYr5cptpVC9msWRg0M4hiLYcA5VuIEa1IHAIzzDK7wZT8aL8W58TFuXjNnMAfyR8fkD8P+W0w==</latexit>
<latexit sha1_base64="fAxixLzqPP0ibRTKMP9FMpa2phA=">AAACB3icbZDLSsNAFIZPvNZ6i7oUZLAIlUJJpKAboejGZQV7o41lMpm0QycXZiZCCd258VXcuFDEra/gzrdxmnahrT8MfPznHM6c3405k8qyvo2l5ZXVtfXcRn5za3tn19zbb8goEYTWScQj0XKxpJyFtK6Y4rQVC4oDl9OmO7ye1JsPVEgWhXdqFFMnwP2Q+Yxgpa2eedS+T4usZJ+O0SXKWFMJdT3KFUbtnlmwylYmtAj2DAowU61nfnW9iCQBDRXhWMqObcXKSbFQjHA6zncTSWNMhrhPOxpDHFDppNkdY3SiHQ/5kdAvVChzf0+kOJByFLi6M8BqIOdrE/O/WidR/oWTsjBOFA3JdJGfcKQiNAkFeUxQovhIAyaC6b8iMsACE6Wjy+sQ7PmTF6FxVrYr5cptpVC9msWRg0M4hiLYcA5VuIEa1IHAIzzDK7wZT8aL8W58TFuXjNnMAfyR8fkD8P+W0w==</latexit>
<latexit sha1_base64="9UY9Tzo0xSguDEfPuYbWHGsRnh0=">AAAB7nicbVDLSgNBEOyNrxhfUY9eBoMQL2FXAnoMevEYwTwkWcPsZJIMmZ1dZnqFsOQjvHhQxKvf482/cZLsQRMLGoqqbrq7glgKg6777eTW1jc2t/LbhZ3dvf2D4uFR00SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpn5rSeujYjUPU5i7od0qMRAMIpWaj08pmX3fNorltyKOwdZJV5GSpCh3it+dfsRS0KukElqTMdzY/RTqlEwyaeFbmJ4TNmYDnnHUkVDbvx0fu6UnFmlTwaRtqWQzNXfEykNjZmEge0MKY7MsjcT//M6CQ6u/FSoOEGu2GLRIJEEIzL7nfSF5gzlxBLKtLC3EjaimjK0CRVsCN7yy6ukeVHxqpXqXbVUu87iyMMJnEIZPLiEGtxCHRrAYAzP8ApvTuy8OO/Ox6I152Qzx/AHzucPZ+qO9w==</latexit>
<latexit sha1_base64="4l4tyu1wQNISUpBmeK1HI2A9PJA=">AAAB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2A9pQ9lsN+3SzSbsToQS+iO8eFDEq7/Hm//GbZuDtj4YeLw3w8y8IJHCoOt+O4WNza3tneJuaW//4PCofHzSNnGqGW+xWMa6G1DDpVC8hQIl7yaa0yiQvBNMbud+54lrI2L1gNOE+xEdKREKRtFKnf6YYvY4G5QrbtVdgKwTLycVyNEclL/6w5ilEVfIJDWm57kJ+hnVKJjks1I/NTyhbEJHvGepohE3frY4d0YurDIkYaxtKSQL9fdERiNjplFgOyOKY7PqzcX/vF6K4bWfCZWkyBVbLgpTSTAm89/JUGjOUE4toUwLeythY6opQ5tQyYbgrb68TtpXVa9Wrd3XKo2bPI4inME5XIIHdWjAHTShBQwm8Ayv8OYkzovz7nwsWwtOPnMKf+B8/gCDZ4+x</latexit>
<latexit sha1_base64="ErhthiE4wcyOR5LhKX4LnvmTkNM=">AAAB+HicbVBNS8NAEN34WetHox69LBahXkoiBb0IRS+epIL9oo1ls922SzebsDsRasgv8eJBEa/+FG/+G7dtDtr6YODx3gwz8/xIcA2O822trK6tb2zmtvLbO7t7BXv/oKHDWFFWp6EIVcsnmgkuWR04CNaKFCOBL1jTH19P/eYjU5qH8h4mEfMCMpR8wCkBI/XsQndEIGmnl+2HpHR7mvbsolN2ZsDLxM1IEWWo9eyvbj+kccAkUEG07rhOBF5CFHAqWJrvxppFhI7JkHUMlSRg2ktmh6f4xCh9PAiVKQl4pv6eSEig9STwTWdAYKQXvan4n9eJYXDhJVxGMTBJ54sGscAQ4mkKuM8VoyAmhhCquLkV0xFRhILJKm9CcBdfXiaNs7JbKVfuKsXqVRZHDh2hY1RCLjpHVXSDaqiOKIrRM3pFb9aT9WK9Wx/z1hUrmzlEf2B9/gAd6pK9</latexit>
<latexit sha1_base64="ErhthiE4wcyOR5LhKX4LnvmTkNM=">AAAB+HicbVBNS8NAEN34WetHox69LBahXkoiBb0IRS+epIL9oo1ls922SzebsDsRasgv8eJBEa/+FG/+G7dtDtr6YODx3gwz8/xIcA2O822trK6tb2zmtvLbO7t7BXv/oKHDWFFWp6EIVcsnmgkuWR04CNaKFCOBL1jTH19P/eYjU5qH8h4mEfMCMpR8wCkBI/XsQndEIGmnl+2HpHR7mvbsolN2ZsDLxM1IEWWo9eyvbj+kccAkUEG07rhOBF5CFHAqWJrvxppFhI7JkHUMlSRg2ktmh6f4xCh9PAiVKQl4pv6eSEig9STwTWdAYKQXvan4n9eJYXDhJVxGMTBJ54sGscAQ4mkKuM8VoyAmhhCquLkV0xFRhILJKm9CcBdfXiaNs7JbKVfuKsXqVRZHDh2hY1RCLjpHVXSDaqiOKIrRM3pFb9aT9WK9Wx/z1hUrmzlEf2B9/gAd6pK9</latexit>
<latexit sha1_base64="9UY9Tzo0xSguDEfPuYbWHGsRnh0=">AAAB7nicbVDLSgNBEOyNrxhfUY9eBoMQL2FXAnoMevEYwTwkWcPsZJIMmZ1dZnqFsOQjvHhQxKvf482/cZLsQRMLGoqqbrq7glgKg6777eTW1jc2t/LbhZ3dvf2D4uFR00SJZrzBIhnpdkANl0LxBgqUvB1rTsNA8lYwvpn5rSeujYjUPU5i7od0qMRAMIpWaj08pmX3fNorltyKOwdZJV5GSpCh3it+dfsRS0KukElqTMdzY/RTqlEwyaeFbmJ4TNmYDnnHUkVDbvx0fu6UnFmlTwaRtqWQzNXfEykNjZmEge0MKY7MsjcT//M6CQ6u/FSoOEGu2GLRIJEEIzL7nfSF5gzlxBLKtLC3EjaimjK0CRVsCN7yy6ukeVHxqpXqXbVUu87iyMMJnEIZPLiEGtxCHRrAYAzP8ApvTuy8OO/Ox6I152Qzx/AHzucPZ+qO9w==</latexit>
<latexit sha1_base64="pmdXaQtx/RkbjEEj0JjG94undSA=">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeiF48t2FZpQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgpr6xubW8Xt0s7u3v5B+fCoreNUMWyxWMTqPqAaBZfYMtwIvE8U0igQ2AnGNzO/84RK81jemUmCfkSHkoecUWOl5kO/XHGr7hxklXg5qUCORr/81RvELI1QGiao1l3PTYyfUWU4Ezgt9VKNCWVjOsSupZJGqP1sfuiUnFllQMJY2ZKGzNXfExmNtJ5Ege2MqBnpZW8m/ud1UxNe+RmXSWpQssWiMBXExGT2NRlwhcyIiSWUKW5vJWxEFWXGZlOyIXjLL6+S9kXVq1VrzVqlfp3HUYQTOIVz8OAS6nALDWgBA4RneIU359F5cd6dj0VrwclnjuEPnM8fuYOM5A==</latexit>
<latexit sha1_base64="YET3kkOS2zY8mMv2K4bjQDXERhw=">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipyQblilt1FyDrxMtJBXI0BuWv/jBmaYTSMEG17nluYvyMKsOZwFmpn2pMKJvQEfYslTRC7WeLQ2fkwipDEsbKljRkof6eyGik9TQKbGdEzVivenPxP6+XmvDGz7hMUoOSLReFqSAmJvOvyZArZEZMLaFMcXsrYWOqKDM2m5INwVt9eZ20r6perVpr1ir12zyOIpzBOVyCB9dQh3toQAsYIDzDK7w5j86L8+58LFsLTj5zCn/gfP4AyKuM7g==</latexit>
<latexit sha1_base64="e4/cShLnpc3G1u2go1/xMgSgs0s=">AAAB8XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeiF48V7Ae2oWy2m3bpZhN3J0IJ/RdePCji1X/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GNzO//cS1EbG6x0nC/YgOlQgFo2ilh04/60n+SHDaL1fcqjsHWSVeTiqQo9Evf/UGMUsjrpBJakzXcxP0M6pRMMmnpV5qeELZmA5511JFI278bH7xlJxZZUDCWNtSSObq74mMRsZMosB2RhRHZtmbif953RTDKz8TKkmRK7ZYFKaSYExm75OB0JyhnFhCmRb2VsJGVFOGNqSSDcFbfnmVtC6qXq1au6tV6td5HEU4gVM4Bw8uoQ630IAmMFDwDK/w5hjnxXl3PhatBSefOYY/cD5/AHGkkMY=</latexit>
<latexit sha1_base64="t5Z8j6uw1dDrul9BiPoeucvIxm0=">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoMeiF48t2A9oQ9lsJ+3azSbsboQS+gu8eFDEqz/Jm//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ3dzvPKHSPJYPZpqgH9GR5CFn1FipqQflilt1FyDrxMtJBXI0BuWv/jBmaYTSMEG17nluYvyMKsOZwFmpn2pMKJvQEfYslTRC7WeLQ2fkwipDEsbKljRkof6eyGik9TQKbGdEzVivenPxP6+XmvDGz7hMUoOSLReFqSAmJvOvyZArZEZMLaFMcXsrYWOqKDM2m5INwVt9eZ20r6perVpr1ir12zyOIpzBOVyCB9dQh3toQAsYIDzDK7w5j86L8+58LFsLTj5zCn/gfP4A4OuM/g==</latexit>
<latexit sha1_base64="NkuDH4GMVAhCg8VuQG4AALMi2Sw=">AAACNHicbVBNSxxBEO3RxJhNNKsec2myBFaIy0xYMBdB9BLIxUBWd9jZDDU9NdrY0zN214jLMD/Kiz8kFwnkEJFc8xvSu+4hfhQ0vH6vXnXXS0olLfn+T29h8dnzpRfLL1uvXq+svmmvrR/aojICB6JQhRkmYFFJjQOSpHBYGoQ8UXiUnO5P9aNzNFYW+htNShzncKxlJgWQo+L2lyhFRcBDvsO3IuWMKfBIQ6IgDnkW1xHhBdX7TdMduovCM07NBx5u8si4uRTX4U74ve7KzaaJ2x2/58+KPwbBHHTYvA7i9o8oLUSVoyahwNpR4Jc0rsGQFAqbVlRZLEGcwjGOHNSQox3Xs6Ub/t4xKc8K444mPmP/d9SQWzvJE9eZA53Yh9qUfEobVZR9GtdSlxWhFncPZZXiVPBpgjyVBgWpiQMgjHR/5eIEDAhyObdcCMHDlR+Dw4+9oN/rf+13dvfmcSyzt+wd67KAbbNd9pkdsAET7JJds9/sxrvyfnm33p+71gVv7tlg98r7+w94eaot</latexit>
3 MODEL FRAMEWORK
Simulation basics A physical trajectory, measured at discrete time intervals, is a sequence of states, (X1, . . . , XT), where Xt represents properties such as the positions, velocities, masses, etc., of the elements of the system. A physical simulator, s, is a function that maps current and/or previous state(s), which we term the context, X≤t, to a predicted future state, X̂t+1 = s(X≤t) (see Figure 1a)¹. A simulated physical trajectory, termed a rollout, (Xt, X̂t+1, X̂t+2, . . . ), can be generated by repeatedly applying s to its own predicted states, X̂t+1 = s(X̂≤t).
Simulators are often comprised of a PREDICTOR mechanism which maps the context X≤t to an update value Ŷ, which represents information about the system's temporal evolution at the current time. Then Ŷ is used by an UPDATER mechanism to update the current state to the next state: X̂t+1 = UPDATER(X≤t, Ŷ), e.g., updating the current positions and velocities represented by Xt with the new velocities and accelerations represented by Ŷ, to predict the next state.
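To make the PREDICTOR/UPDATER decomposition and the rollout procedure concrete, here is a minimal sketch in JAX-style Python; the `predictor` callable, the fixed-length context window, and the choice of treating Ŷ as a position change are illustrative assumptions, not the paper's exact interfaces.

```python
import jax.numpy as jnp

def updater(x_t, y_hat):
    # Example UPDATER: interpret y_hat as a per-element position change
    # ("average velocity") and add it to the current positions.
    return x_t + y_hat

def simulator(context, predictor):
    # context: stacked recent states, shape [T_ctx, num_elements, dim].
    y_hat = predictor(context)           # PREDICTOR: context -> update value
    return updater(context[-1], y_hat)   # UPDATER: current state -> next state

def rollout(initial_context, predictor, num_steps):
    # Generate a trajectory by repeatedly feeding the simulator its own output.
    context, trajectory = initial_context, []
    for _ in range(num_steps):
        x_next = simulator(context, predictor)
        trajectory.append(x_next)
        # Slide the context window: drop the oldest state, append the prediction.
        context = jnp.concatenate([context[1:], x_next[None]], axis=0)
    return jnp.stack(trajectory)
```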
Explicit simulators Across science, engineering, and graphics, a popular class of simulators are defined explicitly: the state update Ŷ is predicted directly from X≤t using an explicit forward function, Ŷ = fD(X≤t), as illustrated in Figure 1b. Among the rapidly growing family of learned simulators, the forward function fD is typically implemented using a neural network (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021).
Constraint-based implicit simulators Here we explore learned simulators based on implicit formulations of the dynamics. Rather than predicting the desired state directly, as in explicit formulations, our implicit simulator uses a differentiable constraint function, c = fC(X≤t, Ŷ), where c is a scalar that quantifies how well a proposed state update Ŷ agrees with X≤t. A future prediction is generated by applying a solver, such as an optimization or zero-finding algorithm, to find a Ŷ that satisfies the constraint function, and applying the UPDATER to update Xt to X̂t+1. The fC can represent all the physical constraints in the system, including the time dynamics.
¹Although physics is Markovian, we use X≤t as input because our framework can also apply to dynamic processes which are non-Markovian. Providing previous states can also often be helpful when there are hidden properties of the system which are only identifiable over a sequence of observed states, and when a state does not represent velocity or momentum information.
As illustrated in Figure 1d, we formulate our constraint-solving procedure via an iterative method that starts with an initial proposal, Y^(0). On the i-th iteration, the solver uses the gradient of fC w.r.t. Y at the current proposal to compute a change to the proposal, δY = −λ ∇Y fC(X≤t, Y)|Y=Y^(i). This δY is then used to revise the proposal to Y^(i+1) = Y^(i) + δY. This process repeats for N steps, and the final proposal value is treated as the PREDICTOR's output, Ŷ = Y^(N).
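A minimal sketch of this solver loop, assuming fC is any differentiable scalar-valued function of (X≤t, Y) (e.g., a neural network) and using JAX auto-differentiation for ∇Y fC; the function names and the fixed step size are illustrative:

```python
import jax

def solve_constraint_gd(f_c, context, y_init, step_size=1e-3, num_steps=5):
    # Gradient of the scalar constraint w.r.t. the proposed update Y.
    grad_fn = jax.grad(f_c, argnums=1)
    y = y_init
    for _ in range(num_steps):
        # deltaY = -lambda * grad_Y f_C evaluated at the current proposal.
        y = y - step_size * grad_fn(context, y)
    return y  # Y_hat = Y^(N): the PREDICTOR's output
```

Because every step is differentiable, loss gradients can be backpropagated through the whole loop, as described below.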
Our constraint-based model's fC is defined as a trainable function approximator which is real-valued and lower bounded at zero, and uses gradient descent to find Ŷ that minimizes it, where λ is a fixed step size. This induces the semantics that the desired Ŷ = argmin_Y fC(X≤t, Y).
We also explore a second constraint-solving procedure, inspired by Yang et al. (2020)'s Neural Projections' use of "fast projection" (Goldenthal et al., 2007). Specifically, λ = −fC(X≤t, Y^(i)) / ‖∇Y fC(X≤t, Y)|Y=Y^(i)‖². Unlike gradient descent, fast projection is a zero-finding algorithm, so in this case fC is not lower bounded. This induces the semantics that fC(X≤t, Ŷ) = 0.
This general formulation of constraint-based learned simulation can be trained by backpropagating loss gradients through the solver loop². The computational budget of the forward pass can be varied via the number of solver iterations N.
Explicit iterative simulators As a hybrid between forward and constraint-based simulators, we introduce a model which iteratively refines a proposed state update, as in the constraint-based approach described above, but using an explicit function to directly output a δY at each iteration, rather than solving a constraint function (see Figure 1c). See Section 4.3 for details.
4 EXPERIMENTS
4.1 EXPERIMENTAL TASK DOMAINS
We test our framework on a variety of physical environments, shown in Figure 2: ROPE, BOUNCING BALLS and BOUNCING RIGIDS, whose ground truth training and test data were generated by the MuJoCo physics simulator, as well as BOXBATH from Li et al. (2019). These environments demonstrate a diverse set of physical constraints: ‘hard’ constraints (preserving the shape of the rigid object and resolving collisions), and ‘soft’ constraints on fluid movement, handling gravity and preserving the momentum of the rope and bouncing balls. See the Supplementary Materials for details.
4.2 MODEL IMPLEMENTATIONS
Representing the physical system Our experimental domains are physical systems comprised of sets of interacting point-like elements, e.g., objects, particles, mesh vertices, etc. We represent the state as Xt = (p^j_t)_{j=1...|Xt|}, where |Xt| is the number of elements, and p^j_t is the j-th element's position at time t. There are also other static properties of the physical elements, e.g., masses, material types, etc., which we represent with Z to keep it distinct from the dynamic state information represented by Xt. The input context is X≤t = (Z, Xt−3, Xt−2, Xt−1, Xt).
²Implicit differentiation at the solution point should be applicable as well, and potentially offer computational benefits as mentioned in Section 2, though we do not explore that here.
In our implementation, Ŷ represents the predicted changes in position (i.e., the "average velocity" across the time step)³: ŷ^j = ∆p̂^j_{t+1} = p̂^j_{t+1} − p^j_t. The UPDATER then computes X̂t+1 using p̂^j_{t+1} = p^j_t + ∆p̂^j_{t+1}, where p^j_t is provided in the input X≤t.
Constructing the input graph Our implementations of the fD, fDI, and fC use GNNs as the function approximators, so we need to pack the context, X≤t, and (for the fDI and fC) the proposed state update information, Y^(i), into an input graph, Gt = (Vt, Et). The edges Et represent possible interactions among the elements, such as fully connected edges to represent collisions and rigid attachments in BOUNCING BALLS and BOUNCING RIGIDS, spring constraints in ROPE, and interactions among particles within a fixed connectivity radius in BOXBATH.
We enforced translation-invariance by construction, by never providing absolute positions as input to the models. Instead, the j-th input node's features are the static properties and a sequence of the three most recent position changes (i.e., average velocities), v^j_t = [z^j, ∆p^j_{t−2}, ∆p^j_{t−1}, ∆p^j_t], where ∆p^j_t = p^j_t − p^j_{t−1}. For fDI and fC, which also take the solver's current proposed Y^(i), we also concatenate the proposed average velocity from the i-th solver iteration, y^{j,(i)}, as input. For the input edge feature of an edge that connects from node j to node k, we also provide the relative displacement vector between the nodes' positions, e^{jk}_t = p^k_t − p^j_t.
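A sketch of this feature construction follows; the array shapes and names are assumptions for illustration:

```python
import jax.numpy as jnp

def node_features(z, positions, y_proposal=None):
    # z: static per-node properties, shape [num_nodes, z_dim].
    # positions: the four most recent positions, shape [4, num_nodes, dim].
    # Only position *changes* are used, never absolute positions, which makes
    # the features translation-invariant by construction.
    dp = positions[1:] - positions[:-1]             # three recent velocities
    feats = [z, dp[0], dp[1], dp[2]]
    if y_proposal is not None:                      # only for f_DI and f_C
        feats.append(y_proposal)                    # proposed velocity y^(i)
    return jnp.concatenate(feats, axis=-1)

def edge_features(positions_t, senders, receivers):
    # Relative displacement for each edge j -> k: e_t^{jk} = p_t^k - p_t^j.
    return positions_t[receivers] - positions_t[senders]
```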
GNN-based Encode-Process-Decode core We implemented fD, fDI, and fC using Graph Networks (GN) (Battaglia et al., 2018), arranged in the Encode-Process-Decode architecture, similar to previous work on GN-based learned simulators (Sanchez-Gonzalez et al., 2018; 2020; Pfaff et al., 2021). The Encoder uses two MLPs to encode node and edge features into high-dimensional latent vectors. The Processor applies multiple GNs, with unshared weights, in sequence, with node and edge residual connections at each step. We do not use global updates for the GNs. The Decoder uses an MLP to produce an output for each node.
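A compact sketch of one such message-passing step with residual connections is given below; the tiny MLPs and parameter layout are illustrative, and the latent node and edge sizes are assumed equal so the residuals type-check:

```python
import jax.numpy as jnp

def mlp(params, x):
    # Two-layer ReLU MLP; params = ((w1, b1), (w2, b2)).
    (w1, b1), (w2, b2) = params
    return jnp.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def gn_step(params, v, e, senders, receivers):
    # Edge update: condition each edge on its current feature and endpoints.
    edge_in = jnp.concatenate([e, v[senders], v[receivers]], axis=-1)
    e = e + mlp(params["edge"], edge_in)            # residual edge update
    # Aggregate incoming messages per node (sum over received edges).
    agg = jnp.zeros((v.shape[0], e.shape[-1])).at[receivers].add(e)
    # Node update: condition on current node feature and aggregated messages.
    v = v + mlp(params["node"], jnp.concatenate([v, agg], axis=-1))
    return v, e
```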
The fD directly returns Ŷ. The fDI returns a change to the proposed update, δY, for the current iteration. The fC's Decoder returns a scalar for each node to produce a constraint value per node, {c^j | j = 1 . . . |V|}. These node-wise constraint values are averaged to compute a single scalar constraint c for the entire system, c = fC(X≤t, Ŷ) = (1/|V|) Σ_{j=1}^{|V|} c^j.
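The reduction from per-node decoder outputs to the system constraint is then a one-liner; the optional squaring corresponds to the non-negativity used by the gradient-descent variant described in Section 4.3:

```python
import jax.numpy as jnp

def system_constraint(node_scalars, square=True):
    # node_scalars: per-node decoder outputs c^j, shape [num_nodes].
    # Squaring makes each term non-negative (C-GNS-GD); C-GNS-FP leaves the
    # raw values, since fast projection only needs a zero of f_C.
    c = node_scalars ** 2 if square else node_scalars
    return jnp.mean(c)
```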
Solving the constraint For fDI and fC we initialize Y^(0) = ∆p^j_t to the most recent average velocity⁴. We used auto-differentiation in JAX to compute the gradient function, ∇Y fC, and the step size λ was specific to the model variant, as described below. During training we used N = 5 solver iterations.
4.3 MODEL VARIANTS
The key questions in this work are whether constraint-based learned simulators can compete with explicit, forward learned simulators, whether implementing the constraint function with GNNs is more effective than with MLPs, and how minima-based constraint functions solved by gradient descent compare to constraints defined as the zeros of a function which are solved by fast projection (Goldenthal et al., 2007). The following model variants allow us to answer these questions.
Forward GNN This is an explicit, forward GNN-based learned simulator based on the GNS models from Sanchez-Gonzalez et al. (2020); Pfaff et al. (2021). It directly predicts the state update Ŷ from the past time points X≤t.
C-GNS Gradient Descent (C-GNS-GD) and C-GNS-Fast Projections (C-GNS-FP) These are our proposed constraint-based GNN models. For the C-GNS-GD, the scalar per-node output cj was squared, to force the overall fC to be non-negative, and a gradient descent solver with a fixed step size, λ = 0.001, was used to minimize it. For C-GNS-FP, the λ was based on “fast projection” (Goldenthal et al., 2007; Yang et al., 2020), as described in Section 3. Supplementary Figure B.5(c-d) shows ablations.
³For BOXBATH we vary a number of modelling choices to best match those in Sanchez-Gonzalez et al. (2020). The major difference is that we set Ŷ to be the average acceleration rather than average velocity. See Supplementary Materials for other differences.
⁴To ensure analogous information is provided downstream of fD, the update rule also includes the previous average velocity: p̂^j_{t+1} = p^j_t + ∆p^j_t + ŷ^j.
Iterative GNN We implemented a hybrid between the Forward GNN and C-GNS, as shown in Figure 1c. It was identical to the C-GNS models, except its fDI directly predicted proposed state updates as in fD, rather than being computed via the gradients as was done with fC.
ConstraintMLP Gradient Descent (ConstraintMLP-GD) and ConstraintMLP-Fast Projections (ConstraintMLP-FP) These were MLP-based constraint models, which, rather than using GNNs to implement fC, instead concatenated the embeddings of all the input nodes into a single vector and passed them to an MLP implementation of fC. By default, these models cannot handle variable-length inputs, so we padded smaller states with zeros up to the maximum state size. The ConstraintMLP-FP was the MLP analog to our C-GNS-FP, and was similar to Neural Projections (Yang et al., 2020). The ConstraintMLP-GD used gradient descent, and was the MLP analog to our C-GNS-GD. We omit the results for the ConstraintMLP models on BOXBATH (1024 nodes), as MLPs do not generally work well on physical systems with more than a few particles (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2018).
4.4 TRAINING AND EVALUATION
We trained the models to make next-step predictions by computing the L2 loss between the predicted X̂t+1 and the corresponding ground truth Xt+1, averaged over nodes. All model weights and biases were trained using standard backpropagation with the Adam optimizer.
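A minimal sketch of this training step; here `model` stands for any of the simulator variants above, and the use of the optax library and the particular learning rate are assumptions (the paper only specifies the Adam optimizer):

```python
from functools import partial

import jax
import jax.numpy as jnp
import optax  # common JAX optimizer library; an assumption, not from the paper

def loss_fn(params, context, x_target, model):
    # Next-step L2 loss between prediction and ground truth, averaged over nodes.
    x_pred = model(params, context)
    return jnp.mean(jnp.sum((x_pred - x_target) ** 2, axis=-1))

optimizer = optax.adam(1e-4)  # illustrative learning rate

@partial(jax.jit, static_argnums=4)
def train_step(params, opt_state, context, x_target, model):
    loss, grads = jax.value_and_grad(loss_fn)(params, context, x_target, model)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss
```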
At test time, we compute 1-step metrics by evaluating the 1-step errors along each point of the ground truth trajectory. We also evaluate rollout errors by iteratively applying the learned model starting from an initial state, over 160 rollout steps, and computing the error between the predicted and ground truth trajectories.
5 RESULTS
Predictive accuracy⁵ Our experimental results show that our C-GNS-GD's performance was generally better than that of the other model variants. Figure 3 compares the different models on 1-step and rollout position MSE (see Supplementary Table B.1 for numerical results). For each dataset, we used the same number of message-passing steps (MP) for all GN-based models. We used 2 MPs for the ROPE dataset, and 1 MP for all other tasks.

⁵Videos of the model rollouts are available at sites.google.com/view/constraint-based-simulator
The C-GNS-GD has lower 1-step MSE between the ground truth and predicted positions than other models across all datasets. Qualitatively, we observed that for the Forward GNN with a single message-passing step, the box in BOXBATH "melts" over time, as the forward model cannot preserve its rigid shape (see Videos). The comparable C-GNS-GD, by contrast, maintains the rigidity more effectively. These quantitative results suggest that constraint-based learned simulators are a competitive alternative to explicit, forward learned simulators. We generally found that the Iterative GNN was fairly competitive with the C-GNS-GD in overall performance and better than the Forward GNN.
We also found that the C-GNS-FP was generally less stable across seeds, and not as accurate as the C-GNS-GD. The same conclusion holds for ConstraintMLP-FP versus ConstraintMLP-GD. We speculate that the fast projection algorithm may make training challenging because the step size λ is proportional to fC, which may cause poor zero-finding early in training when fC is not yet informative. Additionally, we find that the C-GNS-FP algorithm becomes unstable in areas with shallow constraint gradients, perhaps because its λ depends on the inverse of the gradient's norm.
We explored how varying the message-passing steps and solver iterations (N) influenced the relative performance among the models in our ROPE dataset. Figure 4 shows that the C-GNS-GD generally required fewer parameters and message-passing steps to achieve comparable 1-step MSE to the other models. Supplementary Figure B.3 shows similar results for the rollout MSE. For most combinations of message-passing steps and number of solver iterations, C-GNS-GD (green) outperforms the Iterative GNN (yellow) and C-GNS-FP (purple), as well as the Forward GNN (blue) with the same number of MPs (the Forward GNN is not an iterative model, so we plot it as a single bar). We hypothesize that the solver iterations in the C-GNS and Iterative GNN may play a similar role to message passing with shared weights.
Interpreting the learned constraints To better understand the learned fC functions in the C-GNS-GD, Figure 5 visualizes the node-wise constraint values as a function of Y (proposed average velocity) for different nodes in the ROPE dataset while holding the other nodes' proposed update Y fixed.
We also overlay the sequence of five points that represent the proposed Y^(i) steps from the solver, where all nodes were jointly optimized. The figure shows the learned fC has a minimum near the ground truth Y, which the gradient descent steps are able to reach.
Incorporating novel constraints at test time We next explored a unique advantage of the constraint-based model: because fC measures the degree to which the physical constraints are violated, we can incorporate additional, hand-designed constraints at test time, and use the model to potentially satisfy them. For the ROPE dataset, we designed three constraint functions that return positive values which increase quadratically as the rope enters different "forbidden" regions of the space: a vertical wall, a horizontal floor, and a disk-shaped region. We weighted these constraint terms by a coefficient hyperparameter, added each of the hand-designed constraints to the learned fC term of C-GNS-GD, and ran the forward evaluation of the model, as sketched below.
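A sketch of how such a hand-designed term can be combined with the learned constraint at test time; the floor geometry, the coordinate convention, and the weighting coefficient are illustrative assumptions:

```python
import jax.numpy as jnp

def floor_penalty(positions_t, y_proposal, floor_height=0.0):
    # Quadratic penalty that grows as proposed next positions enter the
    # forbidden region below a horizontal floor (height assumed at index 1).
    p_next = positions_t + y_proposal
    violation = jnp.maximum(floor_height - p_next[..., 1], 0.0)
    return jnp.mean(violation ** 2)

def joint_constraint(learned_f_c, context, positions_t, y, weight=10.0):
    # The combined objective is solved with the same gradient-descent solver,
    # so the rollout trades off learned dynamics against the new constraint.
    return learned_f_c(context, y) + weight * floor_penalty(positions_t, y)
```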
As shown in Figure 6, the model was able to simulate the dynamics in a way that the corresponding forbidden region was avoided. In some cases, satisfying the joint constraint resulted in unintuitive behaviors, such as the rope links changing in length to adapt to the obstacle (Videos). However, this is to be expected, as the minimum of the joint constraint may not overlap with the minimum of the learned constraint, which is the one that would otherwise guarantee length preservation. For this example we added a further hand-designed constraint that incentivizes maintaining relative distances between nodes. In general this is a powerful example of how constraint-based models can generalize outside their training data, and solve both for the learned dynamics and arbitrary desired constraints.
Generalizing to larger systems via increased solver iterations In principle, iterative and constraint-based simulators should find more accurate solutions by increasing the number of solver iterations, N. We investigated whether the C-GNS-GD and Iterative GNN trained on ROPE could generalize from the Ntrain = 5 on which they were trained to Ntest ∈ [0, 15]. We also analyzed whether increased solver iterations could improve generalization performance from training on ropes with 5−10 nodes to test ropes with 20 nodes.
Figure 7a (top row) shows that for test ropes that match the 5−10 nodes experienced during training, the Iterative GNN (light blue) overfits very heavily to Ntest = Ntrain = 5: error increases abruptly for N ≤ 4 and N ≥ 6. By contrast, the C-GNS-GD (light red) generalizes much better to different Ntest. Figure 7a (bottom row) shows that for test ropes with 20 nodes, the Iterative GNN again overfits, while the C-GNS-GD can generalize well to longer ropes if Ntest is increased.
We also trained the Iterative GNN and C-GNS-GD with additional loss terms that were applied to the Y (i) on each solver iteration, not only the final one, Ŷ = Y (N). We used an exponential decay factor, α = 0.25, which downweighted this additional loss term more heavily for earlier solver proposals. The dark blue and red curves in Figure 7a show how this additional loss further improves generalization to more solver iterations and larger systems as test time for the Iterative GNN, but especially the C-GNS-GD. Figure 7b visualizes how increasing the solver iterations systematically improves the quality of the long-term rollout accuracy in the ROPE dataset.
Together these results show the C-GNS-GD is effective in making use of additional resources at test time. This opens the exciting possibility of training on small, simple systems, and testing on large, complex systems. See Supplementary Figure B.2 for further details.
6 DISCUSSION
We presented a general-purpose framework for constraint-based learned simulation, where a learned constraint function implicitly represents the dynamics, and future predictions are generated via a constraint solver. We implemented our framework using GNNs as the constraint function and gradient descent as the constraint solver, and tested it in a variety of challenging physical simulation problems. Our results showed that our C-GNS has competitive or better performance compared to previous learned simulators. We demonstrated unique abilities to generalize to novel, hand-designed constraints, and use more solver iterations than experienced during training to improve the accuracy on larger systems.
We can hypothesize about the relationship between explicit, forward learned simulators and implicit, constraint-based ones in terms of the sharing schemes of these architectures. The C-GNS has a stronger inductive bias than the Forward GNN. The transformation of fC in C-GNS effectively ties the parameters in the resulting ∇Y fC function, and the solver iterations are analogous to how a recurrent neural network's parameters are shared over iterations. In contrast, the message-passing steps in the Forward GNN used in our work are unshared. In principle, the fD of the Forward GNN is more expressive because, given enough depth, after training it could learn to take parameter values that are equivalent to the shared parameters of C-GNS. Our results shown in Figure 4 support this possibility: the Forward GNN with many more message-passing steps eventually approaches the C-GNS's performance. Moreover, we speculate the C-GNS's inductive biases contribute to its advantages in terms of incorporating novel hand-designed constraints and generalizing to more solver iterations and larger systems.
More broadly, the performance, generality and unique advantages of constraint-based learned simulation make it an important new direction in the advancement of machine learning methods for complex simulation problems in science and engineering.
7 REPRODUCIBILITY STATEMENT
We are committed to open-sourcing the model code after the paper is accepted. We will also open-source the MuJoCo datasets that we generated for this paper. We provide more details on the model implementation as well as the hyperparameters used for each model in the Supplementary Material. | 1. What is the main contribution of the paper regarding neural-network-based simulation?
2. How does the proposed simulator encode constraint-based simulation using graph networks?
3. Why are the notations X and Y with and without hats used in the paper, and how do they relate to each other?
4. Can you explain the role of explicit iterative simulators in comparison to the proposed approach?
5. How does the proposed approach ensure translation-invariance and rotation-invariance in the network design?
6. Is the proposed approach capable of dealing with both equality and inequality constraints? If so, how?
7. How would one determine proper hyperparameters N and lambda for a new environment?
8. Does the paper provide sufficient discussion on the generalizability of the learned network on the static properties of the environment? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a neural-network simulator that learns to solve constraints inspired by classic physics-based simulation. The proposed simulator uses a graph neural network to encode a constraint solver and shows results in a number of simulation environments.
The main contribution of the paper is its idea of using graph networks to encode constraint-based simulation.
Review
I find the notations X and Y with and without hats very confusing from the beginning. It would be great if the paper could clarify the semantic meaning of these notations.
Fig. 1 (a): its caption says the UPDATER uses Ŷ to update Xt, but the figure shows an UPDATER using Y to update X̂t. The notation in Fig. 1 (a) itself is extremely confusing: it starts with X1 (without a hat), proceeds to X̂t (with a hat), but then uses X≤t (without a hat), and finally predicts X̂t+1 (with a hat). How did you determine when to use a hat and when not?
When defining the rollout, X with and without hats are mixed. From the rollout's definition, it starts with Xt but is followed by X̂t+1, X̂t+2, .... Does this mean you manually choose t and separate the whole rollout into two parts: before t every X is without a hat and after t every X is with a hat (generated by the simulator)?
I do not see a strong reason for introducing the explicit iterative simulator for comparison. It looks like fDI can be rewritten as fD plus a few linear operators: fDI(X≤t) = (fD(X≤t) − Xt)/N. Therefore, I would expect fD and fDI to have very similar capabilities. What insight would we expect to get from comparing them?
It looks like the proposed constraint-based predictor runs a fixed number of gradient-based iterations and a fixed step size without guaranteeing the constraint is satisfied eventually. Therefore, I feel it is a bit too much to claim that the proposed approach is doing constraint-based simulation and to claim that the proposed network is a constraint solver. I think it would be necessary to either tone down this claim in the title, abstract, introduction, etc., or state explicitly at the beginning of the paper that the proposed approach does not guarantee the constraints are always satisfied.
The network design ensures translation-invariance by taking as inputs position offsets instead of absolute positions. Does the network also ensure rotation-invariance?
“These node-wise constraint values are averaged to compute a single scalar c constraint for the entire system”. This seems questionable to me if the goal is to solve constraints fC(X≤t, Ŷ) = 0, in which case I would expect that using an average of |c| or c² makes more sense.
Looking at the experimental task domains, I feel there are two types of constraints involved in the simulator: equality constraints (e.g., end points of the consecutive rope segments must share the same location) and inequality constraints (e.g., bouncing balls must have nonnegative distances to the boundary). Does this paper deal with both equality and inequality constraints?
Both N and lambda seem to be crucial hyperparameters that need to be chosen for different environments individually. If this approach is applied to a new environment, how would you determine a proper N and lambda?
“In principle, iterative and constraint-based simulators should find more accurate solutions by increasing the number of solver iterations, N.” I am not sure I fully agree with this claim because the iterative solver is not equipped with a line search algorithm that adaptively changes the step size.
The large rollout MSE (orders of magnitude larger than #timesteps x one-step MSE) in Fig. 3 bottom seems to imply the learned simulator is not a good replacement for a numerical simulator because it accumulates errors from all time steps. I understand this may be a common issue that many other neural-network-based simulators also suffer from (all baselines in Fig. 3 have accumulated substantial errors and produced large rollout MSE), but I am still wondering whether the authors could give people a strong reason why it is useful to develop such a neural-network-based simulator if it is not accurate.
Similarly, I wonder if this paper could provide more discussions on the generalizability of the learned network on the static properties of the environment (the Z vector in the main paper), e.g., density, material types, time step size, etc. My understanding is that the network needs to be retrained if Z is updated, or the training set needs to be augmented to see various Z values. This does not seem to be very ideal for a simulator. Again, I understand this may be a common problem for many neural-network-based simulation papers, so I won’t hold it against this paper too much. |
ICLR | Title
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
Abstract
Content and style (C-S) disentanglement intends to decompose the underlying explanatory factors of objects into two independent latent spaces. Aiming for unsupervised disentanglement, we introduce an inductive bias to our formulation by assigning different and independent roles to content and style when approximating the real data distributions. The content embeddings of individual images are forced to share a common distribution. The style embeddings encoding instancespecific features are used to customize the shared distribution. The experiments on several popular datasets demonstrate that our method achieves the state-of-theart disentanglement compared to other unsupervised approaches and comparable or even better results than supervised methods. Furthermore, as a new application of C-S disentanglement, we propose to generate multi-view images from a single view image for 3D reconstruction.
1 INTRODUCTION
The disentanglement task aims to recover the underlying explanatory factors of natural images into different dimensions of latent space, and provide an informative representation for tasks like image translation (Wu et al., 2019b; Kotovenko et al., 2019), domain adaptation (Li et al., 2019; Zou et al., 2020) and geometric attributes extraction (Wu et al., 2019c; Xing et al., 2019), etc.
The previous methods (Kim & Mnih, 2018; Higgins et al., 2017; Burgess et al., 2018a; Kumar et al., 2017) learn disentangled factors by optimizing the total correlation in an unsupervised manner. However, Locatello et al. (2019) prove that unsupervised disentanglement is fundamentally impossible without inductive bias on both model and data.
In this paper, we focus on content and style (C-S) disentanglement, where content and style represent two separate groups of factors. The main novelty of our work is that we assign different roles to the content and style in modeling the image distribution instead of treating the factors equally, which is the inductive bias introduced in our method. Most of the previous C-S disentanglement works (Denton & Birodkar, 2017; Jha et al., 2018; Bouchacourt et al., 2018; Gabbay & Hoshen, 2020) rely on supervision, which is hard to obtain for real data. For example, Gabbay & Hoshen (2020) leverage group observation to achieve disentanglement by forcing images from the same group to share a common embedding. To the best of our knowledge, the only exception is Wu et al. (2019c). However, this method forces the content path to learn geometric structure limited by 2D landmarks.
Our definition of content and style is similar to Gabbay & Hoshen (2020), where the content includes the information which can be transferred among groups and the style is image-specific information. When group observation is not available, we define content to include the factors shared across the whole dataset, such as pose. Taking the human face dataset CelebA (Liu et al., 2015) as an example, the content encodes pose and the style encodes identity; multi-views of the same identity have the same style embeddings but different content embeddings, i.e., poses.
Based on the above definitions, we propose a new problem formulation and network architecture by introducing an inductive bias: assigning different and independent roles to content and style when approximating the real data distributions. Specifically, as shown in Figure 1, we force the content embeddings of individual images to share a common distribution, and the style embeddings are used to scale and shift the common distribution to match target image distribution via a generator.
We follow Bojanowski et al. (2018) and Gabbay & Hoshen (2020) to apply latent optimization to optimize the embeddings and the parameters of the generator. We also propose to use instance discrimination as a complementary constraint to assist the disentanglement. Please note that we only use the image reconstruction loss as the supervision; no extra labeling is needed. As the content and style perform a different and independent role when modeling the data, they are disentangled to encode the shared and instance-specific features respectively after the optimization.
The contributions of our work are as follows: we achieve unsupervised C-S disentanglement by introducing an inductive bias in our formulation: assigning different and independent roles to content and style when modeling the real data distributions. Furthermore, we achieve better C-S disentanglement by leveraging instance discrimination. The experiments on several popular datasets demonstrate that our method achieves the state-of-the-art unsupervised C-S disentanglement and comparable or even better results than supervised methods. Besides, we propose to apply C-S disentanglement to a new task: single-view 3D reconstruction.
2 RELATED WORK
Unsupervised Disentanglement. A disentangled representation can be defined as one where individual latent units are sensitive to changes in individual generative factors. There have been a lot of studies on unsupervised disentangled representation learning (Higgins et al., 2017; Burgess et al., 2018a; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018). These models learn disentangled factors by factorizing the aggregated posterior. They can also be used for C-S disentanglement: the learned factors can be divided into two categories, one content-related, the other style-related. However, Locatello et al. (2019) proved that unsupervised disentanglement is impossible without introducing inductive bias on both models and data. Therefore, these models are currently unable to obtain a promising disentangled representation. Motivated by Locatello et al. (2019), we revisit and formulate the unsupervised C-S disentanglement problem to introduce inductive bias.
C-S Disentanglement. Originating from style transfer, most of the prior works on C-S disentanglement divide latent variables into two spaces relying on supervision. To achieve disentanglement, Mathieu et al. (2016) and Szabó et al. (2018) combine the adversarial constraint and auto-encoders. Meanwhile, VAE (Kingma & Welling, 2014) is used with non-adversarial constraints, such as cycle consistency (Jha et al., 2018) and evidence accumulation (Bouchacourt et al., 2018). Furthermore, latent optimization is shown to be superior to amortized inference (Gabbay & Hoshen, 2020). Unlike the above works, Wu et al. (2019c) propose a variational U-Net with structure learning for disentanglement in an unsupervised manner. However, this method is limited by the learning of 2D landmarks. In our paper, we formulate C-S disentanglement and explore inductive bias for unsupervised disentanglement. Note that style transfer aims at modifying the domain style of an image while preserving its content, and its formulation focuses on the relation between domains (Huang et al., 2018a). Our formulation is defined in a single domain but can be extended to cross-domain, as presented in Appendix G.
3 EXPLORING INDUCTIVE BIAS FOR C-S DISENTANGLEMENT
In this section, We first formulate the C-S disentangle problem by exploring the inductive bias and propose a C-S fusion block based on our formulation. We then perform the ablation study to demonstrate how the C-S get disentangled. Finally, our loss functions are presented.
3.1 PROBLEM FORMULATION
We parameterize the target distribution Pi(x|c) as P̂θ,si(x|c), where θ is the parameter of the generator Gθ that maps embeddings to images, and si is the style embedding assigned to Ii. In our formulation of P̂, we assign independent roles to the content and style embeddings. {ci}_{i=1}^N are sampled from the distribution of the shared conditional variable c, which is denoted as Ψ. {si}_{i=1}^N are the parameters that characterize P̂. Thus our inductive bias is introduced into our formulation. Ψ should be close to the ground truth distribution of the dataset, e.g., a Gaussian distribution, a uniform distribution, etc.
Our optimization target is to maximize the log-likelihood of P̂ , and force c to follow the shared distribution Ψ meanwhile:
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathbb{E}_{I_i \sim P_i} \log \hat{P}_{\theta, s_i}(x = I_i \mid c = c_i), \quad \text{s.t. } \mathrm{KL}(p(c) \,\|\, \Psi) \le 0, \tag{1}$$
where c = ci indicates ci is the embedding assigned to Ii, and p(c) denotes the distribution of {ci}Ni=1. To solve this problem, we introduce the Lagrange Multiplier as
$$\min_{\theta, c_i, s_i} -\sum_{i=1}^{N} \mathbb{E}_{I_i \sim P_i} \log \hat{P}_{\theta, s_i}(x = I_i \mid c = c_i) + \lambda \, \mathrm{KL}(p(c) \,\|\, \Psi). \tag{2}$$
3.2 PROPOSED NETWORK ARCHITECTURE
Here we propose a network architecture to address the problem formulated in Section 3.1. In particular, we design a C-S fusion block to assign different roles to content and style in modeling the real data distribution. As shown in Figure 1, we add this block before the generator to force the input to follow the customized distribution.
Inspired by the observation that the mean and variance of features carry the style information (Gatys et al., 2016; Li & Wand, 2016; Li et al., 2017; Huang & Belongie, 2017), we use the style embeddings to provide the statistics that scale and shift the shared distribution Ψ to match the target distribution:

$$z_i = f_\sigma(s_i) \cdot c_i + f_\mu(s_i), \tag{3}$$

where $f_\sigma$ and $f_\mu$ are two fully connected layers predicting the variance and mean, respectively. With this design, Eq. 1 is equivalent to minimizing
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \|I_i - G_\theta(z_i)\| + \lambda \, \mathrm{KL}(p(c) \,\|\, \Psi). \tag{4}$$
Please refer to Appendix G for the proof. To solve Eq. 4, for the reconstruction term, we adopt latent optimization to optimize θ, {ci}_{i=1}^N, and {si}_{i=1}^N. The KL term cannot be optimized directly. However, when Ψ has some specific forms, we can adopt a normalization to force each of the content embeddings ci to follow it approximately.
We can choose different forms of Ψ to fit the ground truth distribution when solving the optimization problem; here we provide two examples:
Gaussian Distribution. Gaussian distribution is used to model the distribution of images in many works (Kingma & Welling, 2014; Kim & Mnih, 2018; Higgins et al., 2017). When setting the shared distribution Ψ as a zero-mean, unit-variance Gaussian distribution N(0, I), we can use instance normalization (IN) to force each of the content embeddings ci to follow N(0, I) in the optimization
process. By combining IN and Eq. 3, we get the same formulation as AdaIN (Huang & Belongie, 2017), which is widely adopted in style transfer tasks (Huang et al., 2018a; Kotovenko et al., 2019). Normalizing the feature map of the network to Gaussian is helpful for network training (Ioffe & Szegedy, 2015; Wu & He, 2018), but our motivation for using normalization is to force the embeddings to share the same Gaussian distribution, which differs from these works.
Uniform Distribution. In many datasets, the distribution of content is close to a uniform distribution; e.g., in the Chairs (Aubry et al., 2014) dataset, the images are synthesized from dense views surrounding the objects. For these datasets, we set Ψ to be a uniform distribution and normalize the content embeddings with L2 normalization to force each of them to approximately follow a uniform distribution (Muller, 1959).
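A minimal sketch of the C-S fusion block combining the two normalization choices above with Eq. 3; the explicitly passed fully connected parameters are illustrative assumptions:

```python
import jax.numpy as jnp

def normalize_content(c, mode="in", eps=1e-5):
    # Force a content embedding toward the shared distribution Psi:
    #   "in" -> approximately N(0, I) via instance normalization,
    #   "l2" -> approximately uniform on the unit sphere via L2 normalization.
    if mode == "in":
        return (c - c.mean(axis=-1, keepdims=True)) / (c.std(axis=-1, keepdims=True) + eps)
    return c / (jnp.linalg.norm(c, axis=-1, keepdims=True) + eps)

def cs_fusion(c, s, w_sigma, b_sigma, w_mu, b_mu, mode="in"):
    # Eq. 3: z = f_sigma(s) * c + f_mu(s), where f_sigma and f_mu are
    # fully connected layers whose weights are passed in explicitly here.
    c = normalize_content(c, mode)
    return (s @ w_sigma + b_sigma) * c + (s @ w_mu + b_mu)
```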
As shown in Figure 1, we can use this C-S fusion block only before the generator, denoted as the Single C-S Fusion framework. We can also provide multiple paths to implement our design by inserting it before every layer of the generator, denoted as a Multiple C-S Fusion framework. For details of the network structures, please refer to Appendix A.2.
3.3 DEMYSTIFYING C-S DISENTANGLEMENT
In this subsection, we perform some experiments to verify that assigning different and independent roles to content and style for modeling real data distribution is the key to C-S disentanglement. The experimental setting can be found in Section 4.
If we do not assign different roles, i.e., concatenating content and style embedding as the input of the generator, the network can hardly disentangle any meaningful information for the CelebA dataset, as shown in Figure 2 (a). Our Single C-S Fusion framework can disentangle the pose and identity of human faces, as shown in Figure 2 (c). The content plays the role of modeling the shared distribution. When the shared distribution constraint is removed, i.e., without normalization, the result is shown in Figure 2 (b), where the pose and identity can not be disentangled. For the Multiple C-S Fusion framework, multiple paths are provided, and the network has more flexibility to approximate the target distribution and outperforms the Single C-S Fusion framework, as shown in Figure 2 (d).
Since the shared distribution is crucial, we experiment to demonstrate that better disentanglement can be achieved by choosing a better distribution to fit the dataset. For the real-world dataset CelebA, the distribution of pose is better modeled as a Gaussian distribution. As Figure 4 (a) and (b) show, IN achieves better disentanglement than L2. For the synthetic Chairs (Aubry et al., 2014) dataset, the distribution of pose is close to uniform distribution rather than Gaussian distribution. Figure 4 (c) and (d) show that the L2 normalization results in better identity and pose consistency.
To better understand how our design helps to guide the disentanglement, we visualize the generated images during the training process in Figure 3. As the generated images show, a mean shape of faces is first learned. Then the faces start to rotate, which indicates the pose is disentangled to the content space. After that, the identity features emerge as the style starts to learn parameters for customizing the shared distribution to approximate the real faces distribution.
3.4 LOSS FUNCTION
Perceptual Loss. Perceptual Loss is widely used in weakly supervised and unsupervised methods (Wu et al., 2020; Ren et al., 2020; Wu et al., 2019c). Gabbay & Hoshen (2020) claimed that perceptual loss is not extra supervision for disentanglement. We adopt a VGG (Simonyan & Zisserman, 2015) perceptual loss LP as a reconstruction loss in Eq. 4, implemented by Hoshen et al. (2019).
Instance Discrimination. Instance discrimination can automatically discover appearance similarity among semantic categories (Wu et al., 2018). Inspired by this, we propose to use instance discrimination as a complementary constraint to enhance consistency among the images sharing the same style embeddings. We denote the instance discrimination loss as LID. The implementation detail can be found in Appendix C.3.
Information Bottleneck. Burgess et al. (2018a) propose improving the disentanglement in β-VAE by controlling the capacity increment, i.e., forcing the KL divergence to be a controllable value. This motivated us to control the information bottleneck capacity of content and style to help avoid leakage. This loss is denoted as LIB. The details of this loss are provided in Appendix C.4. Our full objective is
$$w_P L_P + w_{IB} L_{IB} + w_{ID} L_{ID}, \tag{5}$$

where hyperparameters $w_P$, $w_{IB}$, and $w_{ID}$ represent the weights for each loss term respectively. The ablation study for the loss terms is presented in Appendix E.
4 EXPERIMENTS
In this section, we perform quantitative and qualitative experiments to evaluate our method on seen data following common practice. We test our method on several datasets: Car3D (Reed et al., 2015), Chairs (Aubry et al., 2014), CelebA (Liu et al., 2015). For details of the datasets, please refer to Appendix B.
Baselines. Among all the prior works, we choose several state-of-the-art class-supervised C-S disentanglement benchmarks for comparisons: Cycle-VAE (Jha et al., 2018), a variant of VAE using cycle-consistency; DrNet (Denton & Birodkar, 2017), an adversarial approach; Lord (Gabbay & Hoshen, 2020), a latent optimization method. We also choose two unsupervised disentanglement methods: FactorVAE (Kim & Mnih, 2018), a method that encourages the distribution of representations to be factorial; Wu et al. (2019c) 1, a two-branch VAE framework based on unsupervised structure learning. More details for baselines are presented in Appendix B.
¹There is no open-sourced implementation for it. We modify https://github.com/CompVis/vunet and provide pseudo ground truth landmarks to the network. Thus it becomes semi-supervised.
4.1 QUANTITATIVE EXPERIMENTS
We compare our method (Multiple C-S Fusion framework) with the baselines on Car3D, Chairs and CelebA.
Content Transfer Metric. To evaluate our method's disentanglement ability, we follow the protocol of Gabbay & Hoshen (2020) to measure the quality of content transfer by LPIPS (Zhang et al., 2018). Details are presented in Appendix A.1. The results are shown in Table 1. We achieve the best performance among the unsupervised methods, even though pseudo labels are provided for Wu et al. (2019c). Furthermore, our method is comparable to or even better than the supervised ones.
Classification Metric. Classification accuracy is used to evaluate disentanglement in the literature (Denton & Birodkar, 2017; Jha et al., 2018; Gabbay & Hoshen, 2020). Following Jha et al. (2018), we train two models of a single fully-connected layer to classify content labels from style embeddings and to classify style labels from content embeddings. Low classification accuracy indicates that the leakage between content and style is small. Since CelebA has no content annotations, we regress the position of the facial landmarks from the style embeddings. The results are summarized in Table 2. Though trained without supervision, our method is comparable to several methods. We observe that the classification metric is also influenced by the information capacity and dimensions of the embeddings. For FactorVAE (Kim & Mnih, 2018), the poor reconstruction quality indicates that the latent embeddings encode a very small amount of information that can hardly be classified. The dimensions of the latent vectors of different methods vary from ten to hundreds, and higher dimensionality usually leads to easier classification. Based on the above observations, the classification metric may not be appropriate for disentanglement, which is also observed in Liu et al. (2020).
4.2 QUALITATIVE EXPERIMENTS
Disentanglement & alignment. In Figure 5 (a) and (b), we conduct linear interpolation to show the variation in the two latent manifolds. Horizontally, the identity is changed smoothly with the interpolated style latent space while maintaining the pose information. Vertically, the identity remains the same as the pose changes. These results illustrate the following points: 1) The learned content
and style spaces are continuous. 2) Columns of the left and right figures share the same pose, suggesting that the learned content spaces are aligned. 3) Style-related information is maintained when changing the content embedding and vice versa, suggesting the good disentanglement.
We perform retrieval on the content and style latent spaces, respectively. As shown in Figure 5 (c) and (d), the nearest neighbors in the content space share the same pose but have different identities, which reveals the alignment on content space. To better identify the faces, we let the nearest neighbors in the style space share the same pose, and the generated faces look very similar, revealing that the style is well maintained. As shown in Figure 5 (f), one interesting observation is that zero content embeddings lead to a canonical view. As we assume that the pose distribution of faces is N (0, I), the canonical view is the most common pose in the dataset and sampled from the peak of this distribution. We also show the faces with zero style embeddings in Figure 5 (e), and it looks like the mean face of the dataset.
Visual Analogy & Comparison. Visual analogy (Reed et al., 2015) swaps style and content embeddings for each pair within a test set. We show the visual analogy results of our method against FactorVAE (Kim & Mnih, 2018) (a typical unsupervised baseline) and Lord (the strongest supervised baseline) in Figure 8 on Chairs, Car3D, and CelebA. The results of FactorVAE on all datasets have poor reconstruction quality and bad content transfer. On Car3D, Lord has artifacts (e.g., third column) and could not capture the color style of the test images (e.g., fourth row). On CelebA, the transfer result of Lord is ambiguous, e.g., the content embedding controls facial expression in the fifth column, while other content embeddings do not control expression. Our method achieves pose transfer comparable to Lord and maintains the identity of the images. For more results (including other datasets), please refer to Appendix D.
Comparison with Image Translation. Starting from our assumption that content embeddings share the same distribution, and leveraging the C-S fusion block, we achieve unsupervised content and style disentanglement without needing the “swapping” operation and GAN loss constraint that image translation works (MUNIT (Huang et al., 2018b) and Park et al. (2020)) use to extract the shared content information. As shown in Figure 6, for MUNIT (Huang et al., 2018b) and Park et al. (2020), the content is low-level structure information, while in our case, the content is a high-level semantic attribute of the object, e.g., the pose attribute. As shown in Figure 6 (d), we can also achieve similar performance in exchanging the tone of the images by exchanging the fine style. The fine styles in our method are the style inputs of the last C-S fusion block in the multiple C-S fusion framework.
4.3 UNSEEN IMAGES INFERENCE
Though we learn to disentangle in an unsupervised manner, we may need to process unseen images. An intuitive solution is to train encoders to encode images to the latent spaces. We train style encoder Es and content encoder Ec by minimizing
$$\mathcal{L}_E = \|E_s(I_i) - s_i\|_1 + \|E_c(I_i) - c_i\|_1. \tag{6}$$
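As a sketch, the encoder-training step implied by Eq. 6 could look like the following; the encoder architectures and optimizer settings are our own placeholder choices rather than the released implementation.

```python
import torch

def encoder_step(E_s, E_c, opt, images, s_opt, c_opt):
    """One step of Eq. 6: L1-regress the previously optimized embeddings.

    s_opt, c_opt: the per-image style/content embeddings found by latent
    optimization, used here as regression targets for the encoders.
    """
    loss = ((E_s(images) - s_opt).abs().mean()
            + (E_c(images) - c_opt).abs().mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```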
We apply our model trained on the CelebA dataset to faces collected by Wu et al. (2020) including paintings and cartoon drawings. As shown in Figure 7, our method can be well generalized to unseen images from different domains.
5 NEW APPLICATION
In this work, we explore a new application of C-S disentanglement. For 3D reconstruction, single-view settings lack reliable 3D constraints, which can cause unresolvable ambiguities (Wu et al., 2019a). Thanks to our disentangled representations, we can generate multi-view images from a single view by extracting the style embedding of the single view and then combining it with multiple content embeddings. On Chairs, we adopt Pix2Vox (Xie et al., 2019), a framework for single-view and multi-view 3D reconstruction, to verify the advantages of our method. As shown in Figure 9, the 3D objects reconstructed from multi-view inputs generated by our method are much better than those reconstructed from a single view, and even comparable to those reconstructed from ground-truth multi-view images. For results on CelebA, please refer to Appendix D.3.
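A sketch of the view-synthesis step, under our reading of the text, is given below; `generator`, `style_encoder`, and `pose_contents` are assumed handles to the trained model and to content embeddings covering the desired poses.

```python
import torch

def synthesize_views(generator, style_encoder, image, pose_contents):
    """Generate multiple views of one object from a single input view.

    The style embedding (identity/appearance) comes from the input image of
    shape (3, H, W), while each content embedding in `pose_contents`
    (each of shape (c_dim,)) supplies one pose.
    """
    with torch.no_grad():
        style = style_encoder(image.unsqueeze(0))
        return [generator(style, c.unsqueeze(0)) for c in pose_contents]

# The returned views can then be fed to a multi-view reconstructor such as Pix2Vox.
```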
6 CONCLUSION
We present an unsupervised C-S disentanglement method, based on an inductive bias: assigning different and independent roles to content and style when approximating the real data distributions. Our method outperforms other unsupervised approaches and achieves comparable to or even better performance than the state-of-the-art supervised methods. We also propose to use it to help single-view 3D reconstruction, as a new application of C-S disentanglement. As for the limitation, we fail on
datasets containing multiple categories with large appearance variation, e.g., CIFAR-10 (Krizhevsky et al., 2009), which does not match our assumption. Our method could be adopted to help downstream tasks, e.g., domain translation, ReID, etc. An interesting direction is to apply our method to instance discrimination. With disentangled representations, contrastive learning is expected to perform more effectively.
B BASELINE DETAILS
For the datasets in the main paper, Car3D contains 183 car models, each rendered from 96 poses. Chairs consists of 1393 chair models, each rendered from 62 poses. CelebA contains 202,599 facial images of 10,177 celebrities.
For the baselines, we use open-sourced implementations for Cycle-VAE (Jha et al., 2018) 2, DrNet (Denton & Birodkar, 2017) 3, Lord (Gabbay & Hoshen, 2020) 4 and FactorVAE (Kim & Mnih, 2018) 5.
For FactorVAE, we traverse the latent space to select the dimensions related to pose as the content embedding and treat the other dimensions as the style embedding. For Wu et al. (2019c), there is no open-sourced implementation. We use the code from https://github.com/CompVis/vunet, which uses ground-truth landmarks as input instead of learning the landmarks in an unsupervised manner. To obtain the pseudo ground-truth landmarks, we use the face detection library of Bulat & Tzimiropoulos (2017) for CelebA. We try both the L1 and perceptual losses for all the baselines and select the best.
2 https://github.com/ananyahjha93/cycle-consistent-vae
3 https://github.com/ap229997/DRNET
4 https://github.com/avivga/lord-pytorch
5 https://github.com/1Konny/FactorVAE
We split the datasets into training and testing sets. For CelebA, we randomly select 1,000 of the 10,177 celebrities for testing. For Car3D, we randomly select 20 of the 183 CAD models for testing. For Chairs, we randomly select 100 of the 1,393 models for testing. For baselines with group supervision, only the training sets are used for training. For unsupervised baselines and our method, the full datasets are used for training.
C TECHNICAL COMPONENTS
Here we present the technical components that are helpful to C-S disentanglement. The ablation study for these components is shown in Appendix E.
C.1 LATENT OPTIMIZATION
In the C-S disentanglement literature, it is common to use encoders to predict embeddings, whereas latent optimization (Bojanowski et al., 2018; Gabbay & Hoshen, 2020) directly optimizes the embeddings via back-propagation without using encoders. Encoders have a large number of parameters and require considerable extra training effort. Therefore, we adopt the latent optimization approach to update the latent spaces directly.
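For illustration, a toy latent-optimization loop might look as follows; the tiny MLP generator and the concatenation of embeddings are simplifications for readability (our actual model fuses them through the C-S fusion block), and all sizes are assumed.

```python
import torch
import torch.nn as nn

# Toy data: 100 "images" flattened to 256-d vectors.
num_images, c_dim, s_dim, img_dim = 100, 16, 8, 256
images = torch.randn(num_images, img_dim)

# Embeddings are free parameters, one row per image -- no encoder involved.
content = nn.Parameter(0.01 * torch.randn(num_images, c_dim))
style = nn.Parameter(0.01 * torch.randn(num_images, s_dim))
generator = nn.Sequential(nn.Linear(c_dim + s_dim, 128), nn.ReLU(),
                          nn.Linear(128, img_dim))

opt = torch.optim.Adam([content, style, *generator.parameters()], lr=1e-3)

for step in range(1000):
    idx = torch.randint(0, num_images, (32,))       # minibatch of image ids
    recon = generator(torch.cat([content[idx], style[idx]], dim=1))
    loss = (recon - images[idx]).pow(2).mean()      # reconstruction only
    opt.zero_grad()
    loss.backward()                                 # gradients reach the embeddings
    opt.step()
```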
C.2 REPARAMETRIC MODULE
Inspired by VAE (Kingma & Welling, 2014), we design a reparametric module to force the latent space to be continuous, so that embeddings encoding similar information get closer in the latent space. Given a mean embedding µ with a standard deviation σ, the reparametrized output is σX + µ, where X ∼ N(0, I). To further simplify the problem, we set σ = 1 following Wu et al. (2019c) and Gabbay & Hoshen (2020). The mean embedding is the input style or content embedding. The reparametric module makes the latent space continuous, which is helpful for backpropagation. Though the training images have discrete identities, the optimized style embedding space is continuous: style embeddings of people with similar appearances are close to each other, as shown in Figure 5 (c).
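In code, the module reduces to adding unit-variance Gaussian noise to the mean embedding; this short sketch reflects our reading with σ fixed to 1, and the training/inference switch is our assumption.

```python
import torch

def reparametrize(mu, training=True):
    # Output = sigma * X + mu with sigma fixed to 1 and X ~ N(0, I);
    # at inference time the mean embedding is used directly.
    return mu + torch.randn_like(mu) if training else mu
```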
C.3 INSTANCE DISCRIMINATION LOSS
We first pretrain a ResNet-18 (He et al., 2016) Φ and define a collection of layers of Φ as {Φl}. Among several representative methods (Wu et al., 2018; Ye et al., 2019; He et al., 2020), we observe that the method of Wu et al. (2018) achieves the best performance on our task. Given two images Ii and Ij, we mix the embeddings to generate u = G(R(si), R(cj)) and v = G(R(sj), R(ci)). For samples sharing the same style embedding, we enforce a small feature distance in Φ between them. This loss term can be written as
$$\mathcal{L}_{ID} = \sum_l \lambda_l \left(\|\Phi_l(u) - \Phi_l(x)\|_1 + \|\Phi_l(v) - \Phi_l(y)\|_1\right), \tag{7}$$
where x = G(R(si), R(ci)) and y = G(R(sj), R(cj)). The hyperparameters {λl} balance the contribution of each layer l to the loss; {λl} are set to [1, 1, 1, 1, 1].
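A sketch of Eq. 7 follows; `phi_layers` stands in for feature extractors at the chosen layers of the pretrained ResNet-18 (in practice one forward pass with hooks), and the mean reduction inside the L1 terms is our assumption.

```python
def instance_discrimination_loss(G, R, phi_layers, lambdas, s_i, c_i, s_j, c_j):
    """Eq. 7: mixed images should match same-style reconstructions in feature space."""
    u = G(R(s_i), R(c_j))   # style of image i, content of image j
    v = G(R(s_j), R(c_i))   # style of image j, content of image i
    x = G(R(s_i), R(c_i))   # plain reconstruction of image i
    y = G(R(s_j), R(c_j))   # plain reconstruction of image j
    loss = 0.0
    for phi_l, lam in zip(phi_layers, lambdas):
        loss = loss + lam * ((phi_l(u) - phi_l(x)).abs().mean()
                             + (phi_l(v) - phi_l(y)).abs().mean())
    return loss
```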
C.4 INFORMATION BOTTLENECK
Similar to Anneal VAE (Burgess et al., 2018a), we introduce an information bottleneck given by
$$\mathcal{L}_{IB} = \gamma_s \|s^2 - C_s\|_1 + \gamma_c \|c^2 - C_c\|_1 \tag{8}$$
where Cs and Cc are the information capacities controlling the amount of information of the style and content, respectively. During training, Cs and Cc increase linearly; the rate of increase is controlled by the increase steps and the maximum value. By controlling the increase rate, the content is forced to encode information first, so that the learning process is more consistent with our assumption about the data: the shared conditional variable c is learned first.
For the information bottleneck, by taking the training process of the model without the information bottleneck as a reference, we determine the increase steps and the maxima of the information capacities Cc and Cs. We can strengthen the model's inductive bias by tuning these parameters. For Chairs, we set the maximum of Cc to 5, the start value of Cc to 2, the increase steps of Cc to 1.4 × 10^5, γc to 1, and γs to 0. Note that our model achieves state-of-the-art performance on Chairs even without the information bottleneck.
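Putting the pieces together, the capacity schedule and Eq. 8 could be implemented as below; the per-sample reduction (sum over dimensions, mean over the batch) is an assumption, and the numbers mirror the Chairs settings above. Here `s` and `c` are torch tensors of shape (batch, dim).

```python
def capacity(step, start, max_value, increase_steps):
    """Linearly anneal an information capacity C from `start` to `max_value`."""
    return min(max_value, start + (max_value - start) * step / increase_steps)

def information_bottleneck_loss(s, c, step, gamma_s=0.0, gamma_c=1.0):
    # Chairs settings: C_c grows from 2 to 5 over 1.4e5 steps; gamma_s = 0.
    C_c = capacity(step, start=2.0, max_value=5.0, increase_steps=1.4e5)
    C_s = 0.0  # irrelevant while gamma_s = 0
    loss_s = (s.pow(2).sum(dim=1) - C_s).abs().mean()
    loss_c = (c.pow(2).sum(dim=1) - C_c).abs().mean()
    return gamma_s * loss_s + gamma_c * loss_c
```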
D MORE RESULTS
In this section, we present more qualitative comparisons and results (including more datasets).
D.1 MORE QUALITATIVE EXPERIMENTS
In the main paper, for unsupervised baselines, we only compare our method with FactorVAE (Kim & Mnih, 2018) due to space constraints. As shown in Figure 11, we also outperform Wu et al. (2019c), whose disentanglement is poor: the content embeddings control almost all the factors while the style embeddings only control the tone.
For the datasets in the main paper, we provide more qualitative results in Fig. 24, 25, 26, 27 and 28. Moreover, we also apply our method to higher-resolution images and achieve good performance, as shown in Figure 20.
D.2 MORE DATASETS
Besides the datasets introduced in the main paper, we conduct additional experiments on other datasets: MNIST (LeCun et al., 2010), Cat (Parkhi et al., 2012; Zhang et al., 2008), Anime (Chao, 2019) and Market-1501 (Zheng et al., 2015). MNIST has 70k examples of 10 handwritten digits. Cat has 1.2k cat head images. Anime contains 63,632 anime faces. Market-1501 has 25,259 images. The results are shown in Figures 21, 22 and 23. Furthermore, we show our results on the Market-1501 dataset in Figure 19, which demonstrates that our method can disentangle the human pose and the appearance even though the skeletons have large variances.
D.3 MORE 3D RECONSTRUCTION
Our setting treats every image as a single identity (style) without ambiguity, which suits augmenting single-view images. On CelebA, we use MVF-Net (Wu et al., 2019a), a multi-view method, to reconstruct 3D facial shapes. For a given image, we can obtain the corresponding style embedding and content embedding. We can then generate the front, left, and right views of this image by combining the extracted style embedding with prepared content embeddings. As shown in Figure 13, our augmented multi-view images are consistent, and the 3D meshes based on our method are more accurate than those based on Lord.
E MORE ABLATION STUDY
Here we perform more ablation study for the technical modules.
If we use an amortized scheme instead of a latent optimization scheme, there is leakage between the style and content latent spaces, and the result is worse than latent optimization, as shown in Figure 12 (a) and (c). Furthermore, if we do not use the reparametric module, the reconstruction performance is worse, as shown in Figure 12 (b). For the instance discrimination loss, the comparison is shown in Table 4: the disentanglement is better with the instance discrimination loss. For the information bottleneck, as shown in Table 3, the result with an information bottleneck is much better than the one without it.
F COMPARISON WITH SELECTED RELATED WORK
Comparison with StyleGAN. In our framework, the optimized content (conv) and style embeddings are disentangled representations of the corresponding images. StyleGAN (Karras et al., 2019), by contrast, keeps the input of the convolution branch as a learned constant for the whole dataset and finds that the feature space of the “style” branch has disentanglement ability. For StyleGAN2 (Karras et al., 2020) 7, we select the subset of “style” that represents pose as the content embedding and the remaining subset as the style embedding. As shown in Figure 15, StyleGAN2 entangles pose with other semantic attributes, such as hair and glasses. As shown in Figure 28, the content of our method on human faces is the pose attribute without entanglement.
Comparison with MUNIT & Park et al. (2020). Starting from our assumption that content embeddings share the same distribution, and leveraging an AdaIN-like operation, we achieve unsupervised content and style disentanglement without needing the “swapping” operation and GAN loss constraint that image translation works (MUNIT (Huang et al., 2018b) and Park et al. (2020)) use to extract the shared content information. As shown in Figure 14, for MUNIT (Huang et al., 2018b) and Park et al. (2020), the content is low-level structure information, while in our case, the content is a high-level semantic attribute of the object, e.g., the pose attribute. As shown in Figure 14 (d), we can also achieve similar performance in exchanging the tone of the images by exchanging the fine style.
7We use the implementation from https://github.com/rosinality/stylegan2-pytorch.
The fine styles in our method are the style inputs of the last C-S fusion block in the multiple C-S fusion framework.
G CROSS-DOMAIN APPLICATION
As shown in the main paper, content and style are disentangled in a single domain. Under our assumption, a cross-domain dataset can also be disentangled. In this section, we test our model on a cross-domain dataset to further verify our assumption. In some cases where we merge images from two domains, our method still works and achieves results similar to domain translation. For example, Edges2Shoes (Yu & Grauman, 2014) is a dataset consisting of 50k paired shoe and edge-map images. As shown in Figure 16, the content is the edge structure, and the style is the texture. Thanks to this, we can translate edge images into shoe images and vice versa without any additional operation.
Furthermore, once domain labels are given, we can disentangle and align the cross-domain dataset. This experiment may be helpful for domain transfer and domain adaptation. We train our model on a dataset that combines CelebA and Anime. The model needs to be modified for learning cross-domain data: concatenate the domain embedding and the style embedding, take the result as the style embedding in the original model, and optimize the domain embedding during latent optimization. The results are shown in Figure 17. The learned poses are well aligned in both the animation and reality domains.
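A hypothetical sketch of this modification is shown below; the per-domain embedding table and the dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

num_domains, d_dim = 2, 16                          # e.g., CelebA vs. Anime
domain_table = nn.Parameter(0.01 * torch.randn(num_domains, d_dim))

def domain_aware_style(style, domain_id):
    """Concatenate a learnable domain embedding onto the style embedding.

    style: (B, s_dim) tensor; domain_id: (B,) LongTensor of domain labels.
    The result replaces the style embedding in the original model, and the
    domain rows are optimized jointly during latent optimization.
    """
    return torch.cat([style, domain_table[domain_id]], dim=-1)
```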
H PROOF
Our optimization target is to minimize the KL divergence between P and Q,
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathrm{KL}\!\left(P_i(x \mid c = c_i) \,\|\, Q_{\theta, s_i}(x \mid c = c_i)\right). \tag{9}$$
Expanding the above KL term, we have,
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \int_x P_i(x \mid c = c_i) \log \frac{P_i(x \mid c = c_i)}{Q_{\theta, s_i}(x \mid c = c_i)} \, dx. \tag{10}$$
The above integral cannot be calculated directly, but it can be estimated from the sampled images {Ij} ∼ Pi,
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \sum_{I_j \sim P_i} P_i(x = I_j \mid c = c_i) \log \frac{P_i(x = I_j \mid c = c_i)}{Q_{\theta, s_i}(x = I_j \mid c = c_i)}. \tag{11}$$
Separating P and Q from the above equation by logarithmic transformation, we have
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \sum_{I_j \sim P_i} \Big[ P_i(x = I_j \mid c = c_i) \log P_i(x = I_j \mid c = c_i) - P_i(x = I_j \mid c = c_i) \log Q_{\theta, s_i}(x = I_j \mid c = c_i) \Big]. \tag{12}$$
Since Pi(x = Ij | c = ci) is the dataset distribution, which is unknown but fixed, the first term is a constant, and the optimization target is equivalent to
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \sum_{I_j \sim P_i} P_i(x = I_j \mid c = c_i) \log Q_{\theta, s_i}(x = I_j \mid c = c_i). \tag{13}$$
Rewriting it into mathematical expectation form, we have
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathbb{E}_{I_j \sim P_i} \log Q_{\theta, s_i}(x = I_j \mid c = c_i), \tag{14}$$
where Pi refers to Pi(x = Ij | c = ci). Our optimization target is thus equivalent to maximum likelihood estimation. Here we assume Q is a Gaussian distribution,
$$Q_{\theta, s_i}(x \mid c = c_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{1}{2\sigma^2} \|x - G_\theta(s_i, c_i)\|_2^2\right). \tag{15}$$
Combining Eq. 14 and Eq. 15, we have
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \left(-\frac{1}{2\sigma^2} \|I_i - G_\theta(s_i, c_i)\|_2^2\right). \tag{16}$$
Consequently, the final optimization target is
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \|I_i - G_\theta(z_i)\|_2^2. \tag{17}$$
Q.E.D. | 1. What is the main contribution of the paper on content-style disentanglement?
2. What are the strengths of the proposed approach, particularly in the definition of content and style?
3. What are the weaknesses of the paper regarding its novelty and comparisons with other works?
4. Do you have any questions regarding the proposed method, such as the inclusion of facial expression in the content embedding?
5. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? | Review | Review
In this paper, the authors introduce a new approach to content-style (C-S) disentanglement for multimodal unsupervised image-to-image translation. The main idea behind the proposed method is that the content information is encoded into a latent space common to both source and target domains, while the domain-specific style information is used to shift and scale the points from the content distribution so they are mapped by the generator to the target domain distribution.
There are a few issues in this paper that I would like to see addressed.
On the first page of the paper, you write: "When group observation is not available, we define content includes the factors shared across the whole dataset, such as pose. Take the human face dataset CelebA (Liu et al., 2015) as an example, the content encodes pose, and style encodes identity, and multi-views of the same identity have the same style embeddings, but different content embeddings, i.e., poses". In this paper, the authors view only the pose information of the faces in the CelebA dataset as content; but according to the definition above, shouldn't facial expression also be included in the content as shared information across all examples?
The proposed method suggests using the style embedding to shift the content embedding before translation in the Single case and within translation in the Multiple case. How do the authors address the fact that the same idea was used in a few other papers for multimodal cross-domain translation, such as MUNIT by Huang et al. ECCV'2018 or FUNIT by Liu et al. ICCV'2019?
In addition to the main C-S fusion block, two more losses were introduced in this method: Instance Discrimination (ID) and Information Bottleneck (IB). To see the effect of each component, it would be helpful to see the ablation study results.
On the other hand, the paper is well-written, well-structured, and easy to follow; the translation results look promising. In addition, the quantitative metrics used in this paper, in particular the content transfer metric, are very reasonable for evaluating disentanglement quality.
ICLR | Title
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
Abstract
Content and style (C-S) disentanglement intends to decompose the underlying explanatory factors of objects into two independent latent spaces. Aiming for unsupervised disentanglement, we introduce an inductive bias to our formulation by assigning different and independent roles to content and style when approximating the real data distributions. The content embeddings of individual images are forced to share a common distribution. The style embeddings encoding instance-specific features are used to customize the shared distribution. The experiments on several popular datasets demonstrate that our method achieves the state-of-the-art disentanglement compared to other unsupervised approaches and comparable or even better results than supervised methods. Furthermore, as a new application of C-S disentanglement, we propose to generate multi-view images from a single view image for 3D reconstruction.
1 INTRODUCTION
The disentanglement task aims to recover the underlying explanatory factors of natural images into different dimensions of the latent space, providing an informative representation for tasks like image translation (Wu et al., 2019b; Kotovenko et al., 2019), domain adaptation (Li et al., 2019; Zou et al., 2020) and geometric attribute extraction (Wu et al., 2019c; Xing et al., 2019), etc.
The previous methods (Kim & Mnih, 2018; Higgins et al., 2017; Burgess et al., 2018a; Kumar et al., 2017) learn disentangled factors by optimizing the total correlation in an unsupervised manner. However, Locatello et al. (2019) prove that unsupervised disentanglement is fundamentally impossible without inductive bias on both model and data.
In this paper, we focus on content and style (C-S) disentanglement, where content and style represent two separate groups of factors. The main novelty of our work is that we assign different roles to the content and style in modeling the image distribution instead of treating the factors equally, which is the inductive bias introduced in our method. Most of the previous C-S disentanglement works (Denton & Birodkar, 2017; Jha et al., 2018; Bouchacourt et al., 2018; Gabbay & Hoshen, 2020) rely on supervision, which is hard to obtain for real data. E.g., Gabbay & Hoshen (2020) leverage group observation to achieve disentanglement by forcing images from the same group to share a common embedding. To the best of our knowledge, the only exception is Wu et al. (2019c); however, that method forces the content path to learn geometric structure and is limited by 2D landmarks.
Our definition of content and style is similar to Gabbay & Hoshen (2020), where the content includes the information that can be transferred among groups and the style is image-specific information. When group observation is not available, we define content to include the factors shared across the whole dataset, such as pose. Take the human face dataset CelebA (Liu et al., 2015) as an example: the content encodes pose and the style encodes identity; multiple views of the same identity have the same style embedding but different content embeddings, i.e., poses.
Based on the above definitions, we propose a new problem formulation and network architecture by introducing an inductive bias: assigning different and independent roles to content and style when approximating the real data distributions. Specifically, as shown in Figure 1, we force the content embeddings of individual images to share a common distribution, and the style embeddings are used to scale and shift the common distribution to match the target image distribution via a generator.
We follow Bojanowski et al. (2018) and Gabbay & Hoshen (2020) and apply latent optimization to optimize the embeddings and the parameters of the generator. We also propose to use instance discrimination as a complementary constraint to assist the disentanglement. Please note that we only use the image reconstruction loss as the supervision; no extra labeling is needed. As the content and style perform different and independent roles when modeling the data, after the optimization they are disentangled to encode the shared and instance-specific features, respectively.
The contributions of our work are as follows: we achieve unsupervised C-S disentanglement by introducing an inductive bias in our formulation: assigning different and independent roles to content and style when modeling the real data distributions. Furthermore, we achieve better C-S disentanglement by leveraging instance discrimination. The experiments on several popular datasets demonstrate that our method achieves state-of-the-art unsupervised C-S disentanglement and comparable or even better results than supervised methods. Besides, we propose to apply C-S disentanglement to a new task: single-view 3D reconstruction.
2 RELATED WORK
Unsupervised Disentanglement. A disentangled representation can be defined as one where individual latent units are sensitive to changes in individual generative factors. There have been many studies on unsupervised disentangled representation learning (Higgins et al., 2017; Burgess et al., 2018a; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018). These models learn disentangled factors by factorizing the aggregated posterior. They can also be used for C-S disentanglement: the learned factors can be divided into two categories, one content-related and the other style-related. However, Locatello et al. (2019) proved that unsupervised disentanglement is impossible without introducing inductive bias on both models and data. Therefore, these models are currently unable to obtain a promising disentangled representation. Motivated by Locatello et al. (2019), we revisit and formulate the unsupervised C-S disentanglement problem to introduce inductive bias.
C-S Disentanglement. Originating from style transfer, most of the prior works on C-S disentanglement divide latent variables into two spaces relying on supervision. To achieve disentanglement, Mathieu et al. (2016) and Szabó et al. (2018) combine the adversarial constraint and auto-encoders. Meanwhile, VAE (Kingma & Welling, 2014) is used with non-adversarial constraints, such as cycle consistency (Jha et al., 2018) and evidence accumulation (Bouchacourt et al., 2018). Furthermore, latent optimization is shown to be superior to amortized inference (Gabbay & Hoshen, 2020). Unlike the above works, Wu et al. (2019c) propose a variational U-Net with structure learning for disentanglement in an unsupervised manner; however, this method is limited by the learning of 2D landmarks. In our paper, we formulate C-S disentanglement and explore inductive bias for unsupervised disentanglement. Note that style transfer aims at modifying the domain style of an image while preserving its content, and its formulation focuses on the relation between domains (Huang et al., 2018a). Our formulation is defined in a single domain but can be extended to cross-domain settings, as presented in Appendix G.
3 EXPLORING INDUCTIVE BIAS FOR C-S DISENTANGLEMENT
In this section, we first formulate the C-S disentanglement problem by exploring the inductive bias and propose a C-S fusion block based on our formulation. We then perform an ablation study to demonstrate how content and style become disentangled. Finally, our loss functions are presented.
3.1 PROBLEM FORMULATION
We parameterize the target distribution Pi(x|c) as P̂θ,si(x|c), where θ denotes the parameters of the generator Gθ that maps embeddings to images, and si is the style embedding assigned to Ii. In our formulation of P̂, we assign independent roles to the content and style embeddings: {ci}Ni=1 are sampled from the distribution of the shared conditional variable c, denoted Ψ, and {si}Ni=1 are the parameters that characterize P̂. Thus our inductive bias is introduced into the formulation. Ψ should be close to the ground-truth distribution of the dataset, e.g., a Gaussian distribution, a uniform distribution, etc.
Our optimization target is to maximize the log-likelihood of P̂ while simultaneously forcing c to follow the shared distribution Ψ:
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathbb{E}_{I_i \sim P_i} \log \hat{P}_{\theta, s_i}(x = I_i \mid c = c_i), \quad \text{s.t.}\;\; \mathrm{KL}(p(c) \,\|\, \Psi) \le 0, \tag{1}$$
where c = ci indicates that ci is the embedding assigned to Ii, and p(c) denotes the distribution of {ci}Ni=1. To solve this problem, we introduce a Lagrange multiplier:
$$\min_{\theta, c_i, s_i} -\sum_{i=1}^{N} \mathbb{E}_{I_i \sim P_i} \log \hat{P}_{\theta, s_i}(x = I_i \mid c = c_i) + \lambda \, \mathrm{KL}(p(c) \,\|\, \Psi). \tag{2}$$
3.2 PROPOSED NETWORK ARCHITECTURE
Here we propose a network architecture to address the problem formulated in Section 3.1. In particular, we design a C-S fusion block to assign different roles to content and style in modeling the real data distribution. As shown in Figure 1, we add this block before the generator to force the input to follow the customized distribution.
Inspired by the observation that the mean and variance of features carry style information (Gatys et al., 2016; Li & Wand, 2016; Li et al., 2017; Huang & Belongie, 2017), we use the style embeddings to provide the statistics that scale and shift the shared distribution Ψ to match the target distribution:
$$z_i = f_\sigma(s_i) \cdot c_i + f_\mu(s_i), \tag{3}$$
where f_σ and f_µ are two fully connected layers predicting the variance and mean, respectively. With this design, Eq. 1 is equivalent to minimizing
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \|I_i - G_\theta(z_i)\| + \lambda \, \mathrm{KL}(p(c) \,\|\, \Psi). \tag{4}$$
Please refer to Appendix H for the proof. To solve Eq. 4, for the reconstruction term, we adopt latent optimization to optimize θ, {ci}Ni=1, and {si}Ni=1. The KL term cannot be optimized directly; however, when Ψ has certain specific forms, we can adopt a normalization that forces each content embedding ci to follow it approximately.
We can choose different forms of Ψ to fit the ground-truth distribution when solving the optimization problem; here we provide two examples:
Gaussian Distribution. Gaussian distributions are used to model the distribution of images in many works (Kingma & Welling, 2014; Kim & Mnih, 2018; Higgins et al., 2017). When setting the shared distribution Ψ to a zero-mean, unit-variance Gaussian N(0, I), we can use instance normalization (IN) to force each content embedding ci to follow N(0, I) during the optimization process. By combining IN and Eq. 3, we obtain the same formulation as AdaIN (Huang & Belongie, 2017), which is widely adopted in style transfer tasks (Huang et al., 2018a; Kotovenko et al., 2019). Normalizing the feature maps of a network to be Gaussian is known to help training (Ioffe & Szegedy, 2015; Wu & He, 2018), but our motivation for using normalization is different: we force the embeddings to share the same Gaussian distribution.
Uniform Distribution. In many datasets, the distribution of content is close to a uniform distribution; e.g., in the Chairs (Aubry et al., 2014) dataset, the images are synthesized from dense views surrounding the objects. For these datasets, we set Ψ to a uniform distribution and apply L2 normalization to the content embeddings to force each of them to approximately follow a uniform distribution (Muller, 1959).
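To make Eq. 3 and the two normalization choices concrete, here is a minimal PyTorch sketch of the C-S fusion block; the module names, tensor shapes, and the epsilon are our own assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSFusionBlock(nn.Module):
    """z = f_sigma(s) * norm(c) + f_mu(s), i.e., Eq. 3 with the shared-distribution constraint."""
    def __init__(self, content_dim, style_dim, norm="in"):
        super().__init__()
        self.f_sigma = nn.Linear(style_dim, content_dim)  # style -> per-dim scale
        self.f_mu = nn.Linear(style_dim, content_dim)     # style -> per-dim shift
        self.norm = norm  # "in": Gaussian Psi, "l2": (approximately) uniform Psi

    def forward(self, c, s):
        if self.norm == "in":
            # instance-normalize each content embedding to zero mean, unit variance
            c = (c - c.mean(dim=1, keepdim=True)) / (c.std(dim=1, keepdim=True) + 1e-5)
        else:
            # project each content embedding onto the unit sphere
            c = F.normalize(c, p=2, dim=1)
        return self.f_sigma(s) * c + self.f_mu(s)
```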
As shown in Figure 1, we can use this C-S fusion block only before the generator, denoted as the Single C-S Fusion framework. We can also provide multiple paths to implement our design by inserting the block before every layer of the generator, denoted as the Multiple C-S Fusion framework, as sketched below. For details of the network structures, please refer to Appendix A.2.
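Reusing the block above, the Multiple C-S Fusion framework then inserts one fusion per generator layer; the fully connected layers here are placeholders for the actual convolutional generator.

```python
class MultipleCSFusionGenerator(nn.Module):
    """Apply a C-S fusion block before every generator layer (Multiple framework)."""
    def __init__(self, dims, style_dim):
        # dims: [content_dim, hidden_1, ..., output_dim]
        super().__init__()
        self.fusions = nn.ModuleList(CSFusionBlock(d, style_dim) for d in dims[:-1])
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, c, s):
        h = c
        for fuse, layer in zip(self.fusions, self.layers):
            # the style re-enters at every depth through its own fusion block
            h = torch.relu(layer(fuse(h, s)))  # activation kept on the last layer for brevity
        return h
```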
3.3 DEMYSTIFYING C-S DISENTANGLEMENT
In this subsection, we perform some experiments to verify that assigning different and independent roles to content and style for modeling the real data distribution is the key to C-S disentanglement. The experimental settings can be found in Section 4.
If we do not assign different roles, i.e., concatenating the content and style embeddings as the input of the generator, the network can hardly disentangle any meaningful information on the CelebA dataset, as shown in Figure 2 (a). Our Single C-S Fusion framework can disentangle the pose and identity of human faces, as shown in Figure 2 (c). The content plays the role of modeling the shared distribution: when the shared-distribution constraint is removed, i.e., without normalization, pose and identity cannot be disentangled, as shown in Figure 2 (b). For the Multiple C-S Fusion framework, multiple paths are provided, so the network has more flexibility to approximate the target distribution and outperforms the Single C-S Fusion framework, as shown in Figure 2 (d).
Since the shared distribution is crucial, we conduct experiments to demonstrate that better disentanglement can be achieved by choosing a distribution that better fits the dataset. For the real-world dataset CelebA, the distribution of pose is better modeled as a Gaussian distribution; as Figure 4 (a) and (b) show, IN achieves better disentanglement than L2. For the synthetic Chairs (Aubry et al., 2014) dataset, the distribution of pose is close to a uniform distribution rather than a Gaussian distribution; Figure 4 (c) and (d) show that the L2 normalization results in better identity and pose consistency.
To better understand how our design helps to guide the disentanglement, we visualize the generated images during the training process in Figure 3. As the generated images show, a mean shape of faces is first learned. Then the faces start to rotate, which indicates the pose is disentangled to the content space. After that, the identity features emerge as the style starts to learn parameters for customizing the shared distribution to approximate the real faces distribution.
3.4 LOSS FUNCTION
Perceptual Loss. Perceptual Loss is widely used in weakly supervised and unsupervised methods (Wu et al., 2020; Ren et al., 2020; Wu et al., 2019c). Gabbay & Hoshen (2020) claimed that perceptual loss is not extra supervision for disentanglement. We adopt a VGG (Simonyan & Zisserman, 2015) perceptual loss LP as a reconstruction loss in Eq. 4, implemented by Hoshen et al. (2019).
Instance Discrimination. Instance discrimination can automatically discover appearance similarity among semantic categories (Wu et al., 2018). Inspired by this, we propose to use instance discrimination as a complementary constraint to enhance consistency among the images sharing the same style embeddings. We denote the instance discrimination loss as LID. The implementation detail can be found in Appendix C.3.
Information Bottleneck. Burgess et al. (2018a) propose improving the disentanglement in β-VAE by controlling the capacity increment, i.e., forcing the KL divergence to be a controllable value. This motivated us to control the information bottleneck capacity of content and style to help avoid leakage. This loss is denoted as $\mathcal{L}_{IB}$; its details are provided in Appendix C.4. Our full objective is
$$w_P \mathcal{L}_P + w_{IB} \mathcal{L}_{IB} + w_{ID} \mathcal{L}_{ID}, \tag{5}$$
where the hyperparameters $w_P$, $w_{IB}$, and $w_{ID}$ are the weights of the respective loss terms. The ablation study for the loss terms is presented in Appendix E.
1. What is the main contribution of the paper regarding disentangling content from style?
2. What are the strengths and weaknesses of the proposed method compared to other methods in the literature?
3. Do you have any questions or concerns about the training objective and the use of losses in the paper?
4. How does the reviewer assess the clarity and completeness of the explanation in Section 3.1?
5. Are there any concerns regarding the optimization of the KL divergence term in Equation 3?
6. How do the free parameters s_i and c_i affect the application of the method to new samples?
7. Is the loss function in Equation 3 part of the overall training objective? If not, what is its purpose?
8. How does the reviewer evaluate the thoroughness of exploration of different options for combining embeddings in the paper?
9. Does the reviewer think that the paper adequately compares the proposed method to existing methods in the literature?
10. Are there any suggestions for improving the paper or providing more thorough explanations? | Review | Review
Summary: This paper presents a novel combination of losses to learn representations that disentangle content from style. An ablation study is done with a few variants of the model, and extensive quantitative and qualitative experimental results on content/style disentangling demonstrate the model’s performance.
Overall the experimental results look very nice and seem to compare well to other methods both qualitatively and quantitatively. However, the scope of the contribution seems limited since the losses have already been introduced in the literature. The novelty seems to be in the embedding combination method, which is not explored as thoroughly as I’d expect. Also, the grounding of the proposed loss as a probabilistic model seems lacking to me.
Strengths:
Impressive looking results on style transfer in a few distinct domains, including faces (celeba), chairs, and Car3D.
Simple method that could in principle be applied to a variety of domains.
Weaknesses:
This paper would be stronger if it dispensed with the incorrect formalism introduced in Section 3, and introduced the training objective as a set of standard loss functions as in Eq. 4.
If this paper is about methods for combining embeddings for content/style recombination, I would hope to see a more thorough exploration of the different options. For example, Eq. 2 is close to a bilinear model, as used in [4]. It would be interesting to see an explicit comparison to this combination method, as it is a well-known method for linear content/style recombination. It seems like only two methods (concatenation and the proposed method) were tried.
The proposed method seems to be an ad-hoc combination of losses already introduced in the literature.
“most previous C-S disentanglement works rely on supervision, which is hard to obtain for real data” --- full supervision of both content and style is difficult to obtain, but supervision of one variable alone is extremely common and represents the basis for most modern content/style decomposition methods. For example, see generic content/style decomposition models based on GANs [1, 3], and VAEs [2].
Denton & Birodkar 2017 does not use full supervision of content and style, rather they assume that the static components in the video sequence represent content, and everything else represents style.
Page 1 Paragraph 3: This definition of content --- as factors shared across the whole dataset --- doesn’t make sense to me. Each face image has both an identity and a pose, so why is one content and not the other? It seems to me that the distinction is somewhat arbitrary, and probably determined by a target downstream task (e.g., face classification).
Clarity:
Section 3.1: This needs a more complete explanation. A few detailed questions follow: ** Section 3.1: Is Q conditioned on s as well as c? ** Equation 1: Since P(x|c) is NOT conditioned on s, wouldn’t this objective function encourage Q to be independent of s, i.e. Q(x|c,s) = Q(x | c) ? ** Where is \Psi in Eq. 1 ?
Eq. 3 - I don’t follow how this is optimizing KL( P(x | c) || Q_{s, \theta}(x | c ) ). Please give a derivation.
Eq. 2+3 s_i, c_i are free parameters that the network is allowed to optimize, so how are images not in the training set dealt with? Does it require training a new s_i, c_i for each new sample?
Which loss is Eq. 3 part of? I don’t see it listed in Eq. 4.
Rating: References: [1] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018. [2] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014. [3] Michael F. Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pp. 5040–5048, 2016. [4] Joshua B. Tenenbaum and William T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.
ICLR | Title
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
Abstract
Content and style (C-S) disentanglement intends to decompose the underlying explanatory factors of objects into two independent latent spaces. Aiming for unsupervised disentanglement, we introduce an inductive bias to our formulation by assigning different and independent roles to content and style when approximating the real data distributions. The content embeddings of individual images are forced to share a common distribution. The style embeddings encoding instance-specific features are used to customize the shared distribution. The experiments on several popular datasets demonstrate that our method achieves the state-of-the-art disentanglement compared to other unsupervised approaches and comparable or even better results than supervised methods. Furthermore, as a new application of C-S disentanglement, we propose to generate multi-view images from a single view image for 3D reconstruction.
1 INTRODUCTION
The disentanglement task aims to recover the underlying explanatory factors of natural images into different dimensions of a latent space, providing an informative representation for tasks like image translation (Wu et al., 2019b; Kotovenko et al., 2019), domain adaptation (Li et al., 2019; Zou et al., 2020) and geometric attribute extraction (Wu et al., 2019c; Xing et al., 2019), etc.
The previous methods (Kim & Mnih, 2018; Higgins et al., 2017; Burgess et al., 2018a; Kumar et al., 2017) learn disentangled factors by optimizing the total correlation in an unsupervised manner. However, Locatello et al. (2019) prove that unsupervised disentanglement is fundamentally impossible without inductive bias on both model and data.
In this paper, we focus on content and style (C-S) disentanglement, where content and style represent two separate groups of factors. The main novelty of our work is that we assign different roles to the content and style in modeling the image distribution instead of treating the factors equally, which is the inductive bias introduced in our method. Most of the previous C-S disentanglement works (Denton & Birodkar, 2017; Jha et al., 2018; Bouchacourt et al., 2018; Gabbay & Hoshen, 2020) rely on supervision, which is hard to obtain for real data. E.g., Gabbay & Hoshen (2020) leverage group observation to achieve disentanglement by forcing images from the same group to share a common embedding. To the best of our knowledge, the only exception is Wu et al. (2019c). However, this method forces the content path to learn geometric structure and is limited by 2D landmarks.
Our definition of content and style is similar to Gabbay & Hoshen (2020), where the content includes the information that can be transferred among groups and the style is image-specific information. When group observation is not available, we define the content to include the factors shared across the whole dataset, such as pose. Take the human face dataset CelebA (Liu et al., 2015) as an example: the content encodes pose, and the style encodes identity; multiple views of the same identity have the same style embeddings but different content embeddings, i.e., poses.
Based on the above definitions, we propose a new problem formulation and network architecture by introducing an inductive bias: assigning different and independent roles to content and style when approximating the real data distributions. Specifically, as shown in Figure 1, we force the content embeddings of individual images to share a common distribution, and the style embeddings are used to scale and shift the common distribution to match the target image distribution via a generator.
We follow Bojanowski et al. (2018) and Gabbay & Hoshen (2020) in applying latent optimization to optimize the embeddings and the parameters of the generator. We also propose to use instance discrimination as a complementary constraint to assist the disentanglement. Please note that we only use the image reconstruction loss as supervision; no extra labeling is needed. As the content and style play different and independent roles when modeling the data, they are disentangled to encode the shared and instance-specific features, respectively, after the optimization.
The contributions of our work are as follows: we achieve unsupervised C-S disentanglement by introducing an inductive bias in our formulation: assigning different and independent roles to content and style when modeling the real data distributions. Furthermore, we achieve better C-S disentanglement by leveraging instance discrimination. The experiments on several popular datasets demonstrate that our method achieves the state-of-the-art unsupervised C-S disentanglement and comparable or even better results than supervised methods. Besides, we propose to apply C-S disentanglement to a new task: single-view 3D reconstruction.
2 RELATED WORK
Unsupervised Disentanglement. A disentangled representation can be defined as one where individual latent units are sensitive to changes in individual generative factors. There have been a lot of studies on unsupervised disentangled representation learning (Higgins et al., 2017; Burgess et al., 2018a; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018). These models learn disentangled factors by factorizing the aggregated posterior. They can also be used for C-S disentanglement: the learned factors can be divided into two categories, one content-related, the other style-related. However, Locatello et al. (2019) proved that unsupervised disentanglement is impossible without introducing inductive bias on both models and data. Therefore, these models are currently unable to obtain a promising disentangled representation. Motivated by Locatello et al. (2019), we revisit and formulate the unsupervised C-S disentanglement problem to introduce inductive bias.
C-S Disentanglement. Originating from style transfer, most of the prior works on C-S disentanglement divide latent variables into two spaces relying on supervision. To achieve disentanglement, Mathieu et al. (2016) and Szabó et al. (2018) combine an adversarial constraint with auto-encoders. Meanwhile, VAE (Kingma & Welling, 2014) is used with non-adversarial constraints, such as cycle consistency (Jha et al., 2018) and evidence accumulation (Bouchacourt et al., 2018). Furthermore, latent optimization is shown to be superior to amortized inference (Gabbay & Hoshen, 2020). Unlike the above works, Wu et al. (2019c) propose a variational U-Net with structure learning for disentanglement in an unsupervised manner. However, this method is limited by the learning of 2D landmarks. In our paper, we formulate C-S disentanglement and explore inductive bias for unsupervised disentanglement. Note that style transfer aims at modifying the domain style of an image while preserving its content, and its formulation focuses on the relation between domains (Huang et al., 2018a). Our formulation is defined in a single domain but can be extended to cross-domain data, as presented in Appendix G.
3 EXPLORING INDUCTIVE BIAS FOR C-S DISENTANGLEMENT
In this section, we first formulate the C-S disentanglement problem by exploring the inductive bias and propose a C-S fusion block based on our formulation. We then perform an ablation study to demonstrate how content and style get disentangled. Finally, our loss functions are presented.
3.1 PROBLEM FORMULATION
We parameterize the target distribution $P_i(x \mid c)$ as $\hat{P}_{\theta, s_i}(x \mid c)$, where θ is the parameter of the generator $G_\theta$ that maps embeddings to images, and $s_i$ is the style embedding assigned to $I_i$. In our formulation of $\hat{P}$, we assign independent roles to the content and style embeddings. $\{c_i\}_{i=1}^N$ are sampled from the distribution of the shared conditional variable c, which is denoted as Ψ. $\{s_i\}_{i=1}^N$ are the parameters that characterize $\hat{P}$. Thus our inductive bias is introduced into our formulation. Ψ should be close to the ground-truth distribution of the dataset, e.g., a Gaussian distribution, a uniform distribution, etc.
Our optimization target is to maximize the log-likelihood of P̂ , and force c to follow the shared distribution Ψ meanwhile:
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathbb{E}_{I_i \sim P_i} \log \hat{P}_{\theta, s_i}(x = I_i \mid c = c_i), \quad \text{s.t.}\ \mathrm{KL}(p(c) \,\|\, \Psi) \le 0, \tag{1}$$
where $c = c_i$ indicates that $c_i$ is the embedding assigned to $I_i$, and $p(c)$ denotes the distribution of $\{c_i\}_{i=1}^N$. To solve this problem, we introduce a Lagrange multiplier as
$$\min_{\theta, c_i, s_i} -\sum_{i=1}^{N} \mathbb{E}_{I_i \sim P_i} \log \hat{P}_{\theta, s_i}(x = I_i \mid c = c_i) + \lambda\, \mathrm{KL}(p(c) \,\|\, \Psi). \tag{2}$$
3.2 PROPOSED NETWORK ARCHITECTURE
Here we propose a network architecture to address the problem formulated in Section 3.1. In particular, we design a C-S fusion block to assign different roles to content and style in modeling the real data distribution. As shown in Figure 1, we add this block before the generator to force the input to follow the customized distribution.
Inspired by the observation that the mean and variance of features carry the style information (Gatys et al., 2016; Li & Wand, 2016; Li et al., 2017; Huang & Belongie, 2017), we use the style embeddings to provide the statistics that scale and shift the shared distribution Ψ to match the target distribution as
$$z_i = f_\sigma(s_i) \cdot c_i + f_\mu(s_i), \tag{3}$$
where $f_\sigma$ and $f_\mu$ are two fully connected layers predicting the variance and mean, respectively. With this design, Eq. 1 is equivalent to minimizing
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \left\| I_i - G_\theta(z_i) \right\| + \lambda\, \mathrm{KL}(p(c) \,\|\, \Psi). \tag{4}$$
Please refer to Appendix H for the proof. To solve Eq. 4, for the reconstruction term, we adopt latent optimization to optimize θ, $\{c_i\}_{i=1}^N$ and $\{s_i\}_{i=1}^N$. The KL term cannot be optimized directly; however, when Ψ has some specific forms, we can adopt a normalization to force each of the content embeddings $c_i$ to follow it approximately.
We can choose different forms of Ψ to fit the ground truth distribution when solving the optimization problem, here we provide two examples:
Gaussian Distribution. The Gaussian distribution is used to model the distribution of images in many works (Kingma & Welling, 2014; Kim & Mnih, 2018; Higgins et al., 2017). When setting the shared distribution Ψ to a zero-mean, unit-variance Gaussian distribution $\mathcal{N}(0, I)$, we can use instance normalization (IN) to force each of the content embeddings $c_i$ to follow $\mathcal{N}(0, I)$ in the optimization
process. By combining IN and Eq. 3, we get the same formulation as AdaIN (Huang & Belongie, 2017), which is widely adopted in style transfer tasks (Huang et al., 2018a; Kotovenko et al., 2019). Normalizing the feature map of the network to Gaussian is helpful for network training (Ioffe & Szegedy, 2015; Wu & He, 2018), but our motivation for using normalization is to force the embeddings to share the same Gaussian distribution, which differs from these works.
Uniform Distribution. In many datasets, the distribution of content is close to a uniform distribution; e.g., in the Chairs (Aubry et al., 2014) dataset, the images are synthesized from dense views surrounding the objects. For these datasets, we set Ψ to be a uniform distribution and normalize the content embeddings with L2 normalization to force each of them to approximately follow a uniform distribution (Muller, 1959).
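For concreteness, a minimal PyTorch sketch of the C-S fusion block with the two normalization choices follows; the layer sizes are illustrative, and the exact architecture is in Appendix A.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSFusionBlock(nn.Module):
    """z = f_sigma(s) * normalize(c) + f_mu(s); our reading of Eq. 3 plus the
    normalization that forces the content codes to share the distribution Psi."""
    def __init__(self, content_dim, style_dim, norm="in"):
        super().__init__()
        self.f_sigma = nn.Linear(style_dim, content_dim)  # predicts the scale
        self.f_mu = nn.Linear(style_dim, content_dim)     # predicts the shift
        self.norm = norm

    def forward(self, c, s):
        if self.norm == "in":
            # Gaussian Psi: standardize each content code toward N(0, I).
            c = (c - c.mean(dim=1, keepdim=True)) / (c.std(dim=1, keepdim=True) + 1e-8)
        elif self.norm == "l2":
            # Uniform Psi: project each content code onto the unit sphere.
            c = F.normalize(c, dim=1)
        return self.f_sigma(s) * c + self.f_mu(s)

block = CSFusionBlock(content_dim=64, style_dim=128, norm="in")
z = block(torch.randn(8, 64), torch.randn(8, 128))
print(z.shape)  # torch.Size([8, 64])
```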
As shown in Figure 1, we can use this C-S fusion block only before the generator, denoted as the Single C-S Fusion framework. We can also provide multiple paths to implement our design by inserting it before every layer of the generator, denoted as a Multiple C-S Fusion framework. For details of the network structures, please refer to Appendix A.2.
3.3 DEMYSTIFYING C-S DISENTANGLEMENT
In this subsection, we perform some experiments to verify that assigning different and independent roles to content and style for modeling real data distribution is the key to C-S disentanglement. The experimental setting can be found in Section 4.
If we do not assign different roles, i.e., concatenating content and style embeddings as the input of the generator, the network can hardly disentangle any meaningful information on the CelebA dataset, as shown in Figure 2 (a). Our Single C-S Fusion framework can disentangle the pose and identity of human faces, as shown in Figure 2 (c). The content plays the role of modeling the shared distribution. When the shared distribution constraint is removed, i.e., without normalization, the result is shown in Figure 2 (b), where the pose and identity cannot be disentangled. For the Multiple C-S Fusion framework, multiple paths are provided, and the network has more flexibility to approximate the target distribution and outperforms the Single C-S Fusion framework, as shown in Figure 2 (d).
Since the shared distribution is crucial, we experiment to demonstrate that better disentanglement can be achieved by choosing a better distribution to fit the dataset. For the real-world dataset CelebA, the distribution of pose is better modeled as a Gaussian distribution. As Figure 4 (a) and (b) show, IN achieves better disentanglement than L2. For the synthetic Chairs (Aubry et al., 2014) dataset, the distribution of pose is close to uniform distribution rather than Gaussian distribution. Figure 4 (c) and (d) show that the L2 normalization results in better identity and pose consistency.
To better understand how our design helps to guide the disentanglement, we visualize the generated images during the training process in Figure 3. As the generated images show, a mean shape of faces is first learned. Then the faces start to rotate, which indicates the pose is disentangled to the content space. After that, the identity features emerge as the style starts to learn parameters for customizing the shared distribution to approximate the real faces distribution.
3.4 LOSS FUNCTION
Perceptual Loss. Perceptual Loss is widely used in weakly supervised and unsupervised methods (Wu et al., 2020; Ren et al., 2020; Wu et al., 2019c). Gabbay & Hoshen (2020) claimed that perceptual loss is not extra supervision for disentanglement. We adopt a VGG (Simonyan & Zisserman, 2015) perceptual loss LP as a reconstruction loss in Eq. 4, implemented by Hoshen et al. (2019).
Instance Discrimination. Instance discrimination can automatically discover appearance similarity among semantic categories (Wu et al., 2018). Inspired by this, we propose to use instance discrimination as a complementary constraint to enhance consistency among the images sharing the same style embeddings. We denote the instance discrimination loss as LID. The implementation detail can be found in Appendix C.3.
Information Bottleneck. Burgess et al. (2018a) propose improving the disentanglement in β-VAE by controlling the capacity increment, i.e., forcing the KL divergence to be a controllable value. This motivated us to control the information bottleneck capacity of content and style to help avoid leakage. This loss is denoted as $L_{IB}$. The details of this loss are provided in Appendix C.4. Our full objective is
$$w_P L_P + w_{IB} L_{IB} + w_{ID} L_{ID}, \tag{5}$$
where the hyperparameters $w_P$, $w_{IB}$ and $w_{ID}$ represent the weights for each loss term, respectively. The ablation study for the loss terms is presented in Appendix E.
4 EXPERIMENTS
In this section, we perform quantitative and qualitative experiments to evaluate our method on seen data following common practice. We test our method on several datasets: Car3D (Reed et al., 2015), Chairs (Aubry et al., 2014), CelebA (Liu et al., 2015). For details of the datasets, please refer to Appendix B.
Baselines. Among all the prior works, we choose several state-of-the-art class-supervised C-S disentanglement benchmarks for comparisons: Cycle-VAE (Jha et al., 2018), a variant of VAE using cycle-consistency; DrNet (Denton & Birodkar, 2017), an adversarial approach; Lord (Gabbay & Hoshen, 2020), a latent optimization method. We also choose two unsupervised disentanglement methods: FactorVAE (Kim & Mnih, 2018), a method that encourages the distribution of representations to be factorial; Wu et al. (2019c) 1, a two-branch VAE framework based on unsupervised structure learning. More details for baselines are presented in Appendix B.
1 There is no open-sourced implementation for it. We modify https://github.com/CompVis/vunet and provide pseudo ground-truth landmarks to the network. Thus it becomes semi-supervised.
4.1 QUANTITATIVE EXPERIMENTS
We compare our method (Multiple C-S Fusion framework) with the baselines on Car3D, Chairs and CelebA.
Content Transfer Metric. To evaluate our method's disentanglement ability, we follow the protocol of Gabbay & Hoshen (2020) to measure the quality of content transfer by LPIPS (Zhang et al., 2018). Details are presented in Appendix A.1. The results are shown in Table 1. We achieve the best performance among the unsupervised methods, even though pseudo labels are provided for Wu et al. (2019c). Furthermore, our method is comparable to or even better than the supervised ones.
Classification Metric. Classification accuracy is used to evaluate disentanglement in the literature (Denton & Birodkar, 2017; Jha et al., 2018; Gabbay & Hoshen, 2020). Following Jha et al. (2018), we train two models, each a single fully-connected layer, to classify content labels from style embeddings and style labels from content embeddings. Low classification accuracy indicates that the leakage between content and style is small. Since CelebA has no content annotations, we regress the positions of the facial landmarks from the style embeddings. The results are summarized in Table 2. Though without supervision, our method is comparable to several methods. We observe that the classification metric is also influenced by the information capacity and dimensions of the embeddings. For FactorVAE (Kim & Mnih, 2018), the poor reconstruction quality indicates that the latent embeddings encode a very small amount of information that can hardly be classified. The dimensions of the latent vectors of different methods vary from ten to hundreds, and a higher dimension usually leads to easier classification. Based on the above observations, the classification metric may not be appropriate for disentanglement, which is also observed in Liu et al. (2020).
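A sketch of the leakage classifier used by this metric follows (our reading of the protocol; for brevity the sketch reports training accuracy, whereas in practice a held-out split would be used).

```python
import torch
import torch.nn as nn

def leakage_accuracy(embeddings, labels, num_classes, epochs=200):
    """Train a single fully-connected layer to predict `labels` from
    `embeddings`; low accuracy suggests little leaked information."""
    clf = nn.Linear(embeddings.shape[1], num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(clf(embeddings), labels).backward()
        opt.step()
    return (clf(embeddings).argmax(dim=1) == labels).float().mean().item()

# Toy usage with random (hence uninformative) style embeddings.
acc = leakage_accuracy(torch.randn(256, 128), torch.randint(0, 10, (256,)), 10)
print(acc)
```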
4.2 QUALITATIVE EXPERIMENTS
Disentanglement & alignment. In Figure 5 (a) and (b), we conduct linear interpolation to show the variation in the two latent manifolds. Horizontally, the identity is changed smoothly with the interpolated style latent space while maintaining the pose information. Vertically, the identity remains the same as the pose changes. These results illustrate the following points: 1) The learned content
and style spaces are continuous. 2) Columns of the left and right figures share the same pose, suggesting that the learned content spaces are aligned. 3) Style-related information is maintained when changing the content embedding and vice versa, suggesting good disentanglement.
We perform retrieval on the content and style latent spaces, respectively. As shown in Figure 5 (c) and (d), the nearest neighbors in the content space share the same pose but have different identities, which reveals the alignment on content space. To better identify the faces, we let the nearest neighbors in the style space share the same pose, and the generated faces look very similar, revealing that the style is well maintained. As shown in Figure 5 (f), one interesting observation is that zero content embeddings lead to a canonical view. As we assume that the pose distribution of faces is N (0, I), the canonical view is the most common pose in the dataset and sampled from the peak of this distribution. We also show the faces with zero style embeddings in Figure 5 (e), and it looks like the mean face of the dataset.
Visual Analogy & Comparison. Visual analogy (Reed et al., 2015) switches the style and content embeddings for each pair within a test set. We show the visual analogy results of our method against FactorVAE (Kim & Mnih, 2018) (a typical unsupervised baseline) and Lord (the strongest supervised baseline) in Figure 8 on Chairs, Car3D, and CelebA. The results of FactorVAE on all datasets have poor reconstruction quality and bad content transfer. On Cars3D, Lord has artifacts (e.g., third column) and could not capture the color style of the test images (e.g., fourth row). On CelebA, the transfer result of Lord has ambiguity, e.g., the content embedding controls facial expression in the fifth column, while other content embeddings do not control expression. Our method achieves pose transfer comparable to Lord and maintains the identity of the images. For more results (including other datasets), please refer to Appendix D.
Comparison with Image Translation. Starting with our assumption that content embeddings share the same distribution, and leveraging the C-S fusion block, we achieve unsupervised content and style disentanglement without needing the “swapping” operation and GAN loss constraint that image translation works (MUNIT (Huang et al., 2018b) and Park et al. (2020)) use to extract the shared content information. As shown in Figure 6, for MUNIT (Huang et al., 2018b) and Park et al. (2020), the content is low-level structure information, while in our case the content is a high-level semantic attribute of the object, e.g., the pose attribute. As shown in Figure 6 (d), we can also achieve similar performance in exchanging the tone of the images by exchanging the fine style. The fine styles in our method are the style inputs of the last C-S fusion block in the multiple C-S fusion framework.
4.3 UNSEEN IMAGES INFERENCE
Though we learn to disentangle in an unsupervised manner, we may need to process unseen images. An intuitive solution is to train encoders to encode images to the latent spaces. We train style encoder Es and content encoder Ec by minimizing
$$L_E = \left\| E_s(I_i) - s_i \right\|_1 + \left\| E_c(I_i) - c_i \right\|_1. \tag{6}$$
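A sketch of this encoder-training step follows; the encoders are toy linear stand-ins, whereas in practice they would be convolutional networks.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the style and content encoders.
E_s = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
E_c = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))
opt = torch.optim.Adam(list(E_s.parameters()) + list(E_c.parameters()), lr=1e-4)

def encoder_loss(images, s_target, c_target):
    """L_E of Eq. 6: the previously optimized embeddings serve as L1
    regression targets for the encoders."""
    return ((E_s(images) - s_target).abs().mean()
            + (E_c(images) - c_target).abs().mean())

loss = encoder_loss(torch.randn(8, 3, 64, 64), torch.randn(8, 128), torch.randn(8, 64))
loss.backward()
opt.step()
```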
We apply our model trained on the CelebA dataset to faces collected by Wu et al. (2020) including paintings and cartoon drawings. As shown in Figure 7, our method can be well generalized to unseen images from different domains.
5 NEW APPLICATION
In this work, we explore a new application of C-S disentanglement. For 3D reconstruction, single-view settings lack reliable 3D constraints, which can cause unresolvable ambiguities (Wu et al., 2019a). Thanks to our disentangled representations, we can generate multi-view images from a single view by extracting the style embedding of the single view and then combining it with multiple content embeddings. On Chairs, we adopt Pix2Vox (Xie et al., 2019), a framework for single-view and multi-view 3D reconstruction, to verify the advantages of our method. As shown in Figure 9, the 3D objects reconstructed from multi-view inputs generated by our method are much better than those reconstructed from a single view, and even comparable to those reconstructed from ground-truth multi-view images. For results on CelebA, please refer to Appendix D.3.
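A sketch of this augmentation step follows, assuming a trained generator and style encoder together with a bank of content codes for the desired views (all names are hypothetical placeholders).

```python
import torch

def generate_views(G, E_s, image, view_contents):
    """Combine one image's style with several prepared content codes to
    synthesize multi-view inputs for a multi-view 3D reconstructor."""
    s = E_s(image.unsqueeze(0))                   # style of the single view
    views = [G(s, c.unsqueeze(0)) for c in view_contents]
    return torch.cat(views, dim=0)                # (num_views, C, H, W)

# Dummy stand-ins so the sketch runs; replace with the trained networks.
G = lambda s, c: torch.randn(1, 3, 64, 64)
E_s = lambda img: torch.randn(1, 128)
multi_view = generate_views(G, E_s, torch.randn(3, 64, 64), torch.randn(3, 64))
print(multi_view.shape)  # torch.Size([3, 3, 64, 64])
```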
6 CONCLUSION
We present an unsupervised C-S disentanglement method, based on an inductive bias: assigning different and independent roles to content and style when approximating the real data distributions. Our method outperforms other unsupervised approaches and achieves comparable to or even better performance than the state-of-the-art supervised methods. We also propose to use it to help single-view 3D reconstruction, as a new application of C-S disentanglement. As for the limitation, we fail on
datasets containing multiple categories with large appearance variation, e.g., CIFAR-10 (Krizhevsky et al., 2009), which does not match our assumption. Our method could be adopted to help downstream tasks, e.g., domain translation, ReID, etc. An interesting direction is to apply our method to instance discrimination. With disentangled representations, contrastive learning is expected to perform more effectively.
B BASELINE DETAILS
For the datasets in the main paper, Car3D contains 183 car models, each rendered from 96 poses. Chairs consists of 1393 chair models, each rendered from 62 poses. CelebA contains 202,599 facial images of 10,177 celebrities.
For the baselines, we use open-sourced implementations for Cycle-VAE (Jha et al., 2018) 2, DrNet (Denton & Birodkar, 2017) 3, Lord (Gabbay & Hoshen, 2020) 4 and FactorVAE (Kim & Mnih, 2018) 5.
For FactorVAE, we traverse the latent space to select the dimensions related to pose as the content embedding and treat the other dimensions as the style embedding. For Wu et al. (2019c), there is no open-sourced implementation. We use the code from https://github.com/CompVis/vunet,
2https://github.com/ananyahjha93/cycle-consistent-vae 3https://github.com/ap229997/DRNET 4https://github.com/avivga/lord-pytorch 5https://github.com/1Konny/FactorVAE
which uses ground-truth landmarks as input instead of learning the landmarks in an unsupervised manner. To obtain the pseudo ground-truth landmarks, we use the face detection library of Bulat & Tzimiropoulos (2017) for CelebA. We try both the L1 and perceptual losses for all the baselines and select the best.
We split the datasets into training and testing sets. For CelebA, we randomly select 1000 of the 10177 celebrities for testing. For Car3D, we randomly select 20 of the 183 CAD models for testing. For Chairs, we randomly select 100 of the 1393 models for testing. For baselines with group supervision, only the training sets are used for training. For unsupervised baselines and our method, the whole datasets are used for training.
C TECHNICAL COMPONENTS
Here we present three technical components that are helpful to the C-S disentanglement. The ablation study for these components is shown in Appendix E.
C.1 LATENT OPTIMIZATION.
In the C-S disentanglement literature, it is common to use encoders to predict embeddings, while latent optimization (Bojanowski et al., 2018; Gabbay & Hoshen, 2020) directly optimizes the embeddings via back-propagation without using encoders. Encoders have a large number of parameters and require a lot of extra effort for training. Therefore, we adopt the latent optimization approach to update the latent spaces directly.
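A minimal sketch of latent optimization follows: the per-image embeddings are free parameters updated jointly with the generator by back-propagation. For brevity the sketch concatenates the codes, whereas the actual model fuses them with the C-S fusion block (Eq. 3); all sizes are illustrative.

```python
import torch
import torch.nn as nn

N, content_dim, style_dim = 1000, 64, 128
content = nn.Parameter(torch.randn(N, content_dim) * 0.01)
style = nn.Parameter(torch.randn(N, style_dim) * 0.01)
G = nn.Sequential(nn.Linear(content_dim + style_dim, 3 * 64 * 64))  # toy generator

opt = torch.optim.Adam([content, style] + list(G.parameters()), lr=1e-3)

def step(idx, images):
    opt.zero_grad()
    z = torch.cat([content[idx], style[idx]], dim=1)  # toy fusion for brevity
    loss = (G(z) - images.flatten(1)).abs().mean()    # reconstruction loss
    loss.backward()
    opt.step()
    return loss.item()

print(step(torch.tensor([0, 1]), torch.randn(2, 3, 64, 64)))
```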
C.2 REPARAMETRIC MODULE
Inspired by VAE (Kingma & Welling, 2014), we design a reparametric module to force the latent space to be continuous. Thus, embeddings encoding similar information get closer in the latent space. Assume we have a mean embedding µ with a standard deviation σ; the reparametrized output is σX + µ, where X ∼ N(0, I). To further simplify the problem, we set σ = 1 following Wu et al. (2019c) and Gabbay & Hoshen (2020). The mean embedding is the input style or content embedding. The reparametric module makes the latent space continuous, which is helpful for backpropagation. Though the training images have discrete identities, the optimized style embedding space is continuous. Different style embeddings of people with similar appearances are close to each other, as shown in Figure 5 (c).
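A one-line sketch of the module with σ fixed to 1 follows.

```python
import torch

def reparam(mu):
    """Return sigma * X + mu with sigma = 1 and X ~ N(0, I); the added noise
    keeps the latent space continuous while only the mean is optimized."""
    return mu + torch.randn_like(mu)

out = reparam(torch.zeros(8, 128))  # noise around a mean embedding
```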
C.3 INSTANCE DISCRIMINATION LOSS
We first pretrain a ResNet-18 (He et al., 2016) Φ and define a collection of its layers as $\{\Phi_l\}$. Among several representative methods (Wu et al., 2018; Ye et al., 2019; He et al., 2020), we observe that the method of Wu et al. (2018) achieves the best performance on our task. Given two images $I_i$ and $I_j$, we mix the embeddings to generate $u = G(R(s_i), R(c_j))$ and $v = G(R(s_j), R(c_i))$. For samples sharing the same style embedding, we enforce their features in Φ to be close. This loss term can be written as
$$L_{ID} = \sum_l \lambda_l \left( \left\| \Phi_l(u) - \Phi_l(x) \right\|_1 + \left\| \Phi_l(v) - \Phi_l(y) \right\|_1 \right), \tag{7}$$
where $x = G(R(s_i), R(c_i))$ and $y = G(R(s_j), R(c_j))$. The hyperparameters $\{\lambda_l\}$ balance the contribution of each layer $l$ to the loss and are set to $[1, 1, 1, 1, 1]$.
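A sketch of Eq. 7 follows; the toy two-layer extractor stands in for the pretrained ResNet-18, and the images u, x, v, y are random placeholders.

```python
import torch
import torch.nn as nn

# Toy multi-layer feature extractor standing in for the pretrained ResNet-18.
layers = nn.ModuleList([nn.Conv2d(3, 8, 3, padding=1),
                        nn.Conv2d(8, 8, 3, padding=1)])

def features(img):
    feats, h = [], img
    for layer in layers:
        h = torch.relu(layer(h))
        feats.append(h)
    return feats

def instance_discrimination_loss(u, x, v, y, lambdas=(1.0, 1.0)):
    """Eq. 7: match the features of the mixed-embedding images (u, v) to the
    reconstructions (x, y) that share their style embeddings."""
    fu, fx, fv, fy = features(u), features(x), features(v), features(y)
    return sum(l * ((a - b).abs().mean() + (c - d).abs().mean())
               for l, a, b, c, d in zip(lambdas, fu, fx, fv, fy))

imgs = [torch.randn(4, 3, 32, 32) for _ in range(4)]
print(instance_discrimination_loss(*imgs).item())
```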
C.4 INFORMATION BOTTLENECK
Similar to Anneal VAE (Burgess et al., 2018a), we introduce an information bottleneck given by
$$L_{IB} = \gamma_s \left\| s^2 - C_s \right\|_1 + \gamma_c \left\| c^2 - C_c \right\|_1 \tag{8}$$
where $C_s$ and $C_c$ are the information capacities controlling the amount of information of the style and content, respectively. During training, $C_s$ and $C_c$ increase linearly. The rate of increase is controlled by the number of increase steps and the maximum value. By controlling the increase rate, the content is forced
to encode information first, so that the learning process is more consistent with our assumptions about the data: the shared conditional variable c is learned first.
For the information bottleneck, taking the training process of the model without the information bottleneck as a reference, we determine the increase steps and the maxima of the information capacities $C_c$ and $C_s$. We can enhance the model's inductive bias by tuning these parameters. For Chairs, we set the maximum of $C_c$ to 5, the start value of $C_c$ to 2, the increase steps of $C_c$ to $1.4 \times 10^5$, $\gamma_c$ to 1, and $\gamma_s$ to 0. Note that our model achieves state-of-the-art performance on Chairs even without the information bottleneck.
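A sketch of Eq. 8 with the linear capacity schedule follows, using the Chairs hyperparameters quoted above; whether the penalty acts per dimension or on a norm is our reading, so treat the exact form as an assumption.

```python
import torch

def capacity(step, start=2.0, maximum=5.0, increase_steps=1.4e5):
    """Linearly anneal the capacity from `start` to `maximum`."""
    return min(maximum, start + (maximum - start) * step / increase_steps)

def bottleneck_loss(c, s, step, gamma_c=1.0, gamma_s=0.0):
    """Eq. 8: penalize deviation of the squared embeddings from the capacity."""
    C = capacity(step)  # same schedule used for C_c and C_s in this sketch
    return (gamma_s * (s.pow(2) - C).abs().mean()
            + gamma_c * (c.pow(2) - C).abs().mean())

print(bottleneck_loss(torch.randn(8, 64), torch.randn(8, 128), step=70000))
```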
D MORE RESULTS
In this section, we present more qualitative comparisons and more qualitative results (including more datasets).
D.1 MORE QUALITATIVE EXPERIMENTS
In the main paper, for unsupervised baselines, we only compare our method with FactorVAE (Kim & Mnih, 2018) due to space limitations. As shown in Figure 11, we also outperform Wu et al. (2019c). For Wu et al. (2019c), the disentanglement is poor, such that the content embeddings control almost all the factors while the style embeddings only control the tone.
For the datasets in the main paper, we provide more qualitative results in Fig. 24, 25, 26, 27 and 28. Moreover, we also apply our method to higher-resolution images and achieve good performance, as shown in Figure 20.
D.2 MORE DATASETS
Besides the datasets introduced in the main paper, we conduct additional experiments on other datasets: MNIST (LeCun et al., 2010), Cat (Parkhi et al., 2012; Zhang et al., 2008), Anime (Chao, 2019) and Market-1501 (Zheng et al., 2015). MNIST has 70k examples of 10 handwritten digits. Cat has 1.2k cat head images. Anime contains 63,632 anime faces. Market-1501 has 25,259 images. The results are shown in Figures 21, 22 and 23. Furthermore, we show our results on the Market-1501 dataset in Figure 19, which demonstrates that our method can disentangle the human pose and the appearance even though the skeletons have large variances.
D.3 MORE 3D RECONSTRUCTION
Our setting treats every image as a single identity (style) without ambiguity, which suits augmenting single-view images. On CelebA, we use MVF-Net (Wu et al., 2019a), a multi-view method, to reconstruct 3D facial shapes. For a given image, we can get the corresponding style embedding and content embedding. Then we can obtain the front, left, and right views of this image by combining the extracted style embedding with prepared content embeddings 6. As shown in Figure 13, our augmented multi-view images are consistent, and the 3D meshes based on our method are more accurate than those based on Lord.
E MORE ABLATION STUDY
Here we perform more ablation study for the technical modules.
If we use an amortized scheme instead of a latent optimization scheme, there is leakage between the style and content latent spaces, and the result is worse than with latent optimization, as shown in Figure 12 (a) and (c). Furthermore, if we do not use the reparametric module, we find the reconstruction performance is worse, as shown in Figure 12 (b). For the instance discrimination loss, the comparison is shown in Table 4. The disentanglement is better with the instance discrimination loss. For the information bottleneck, as shown in Table 3, the result with an information bottleneck is much better than the one without it.
F COMPARISON WITH SELECTED RELATED WORK
Comparison with StyleGAN. In our framework, the optimized content (conv) and style embeddings are disentangled representations of the corresponding images, while StyleGAN (Karras et al., 2019) keeps the input of the convolution branch as a learned constant for the whole dataset and finds that the feature space of the “style” branch has disentanglement ability. For StyleGAN2 (Karras et al., 2020) 7, we select the subset of “style” which represents pose as the content embedding and the rest as the style embedding. As shown in Figure 15, StyleGAN2 entangles pose with other semantic attributes, such as hair and glasses. As shown in Figure 28, the content of our method on human faces is the pose attribute, without entanglement.
Comparison with MUNIT & Park et al. (2020). Starting with our assumption that content embeddings share the same distribution, and leveraging an AdaIN-like operation, we achieve unsupervised content and style disentanglement without needing the “swapping” operation and GAN loss constraint that image translation works (MUNIT (Huang et al., 2018b) and Park et al. (2020)) use to extract the shared content information. As shown in Figure 14, for MUNIT (Huang et al., 2018b) and Park et al. (2020), the content is low-level structure information, while in our case the content is a high-level semantic attribute of the object, e.g., the pose attribute. As shown in Figure 14 (d), we can also achieve similar performance in exchanging the tone of the images by exchanging the fine style.
7 We use the implementation from https://github.com/rosinality/stylegan2-pytorch.
The fine styles in our method are the style inputs of the last C-S fusion block in the multiple C-S fusion framework.
G CROSS-DOMAIN APPLICATION
As shown in the main paper, the content and style are disentangled in a single domain. Based on our assumption, a cross-domain dataset can also be disentangled. In this section, we test our model on a cross-domain dataset to further verify our assumption. In cases where we merge images from two domains, our method still works and achieves performance similar to domain translation. For example, Edges2Shoes (Yu & Grauman, 2014) is a dataset consisting of 50k paired shoe and edge map images. As shown in Figure 16, the content is the edge structure, and the style is the texture. Thanks to this, we can translate edge images into shoe images and vice versa without any additional operation.
Furthermore, once the domain labels are given, we can disentangle and align the cross-domain dataset. This experiment may be helpful for domain transfer and domain adaptation. We train our model on a dataset that consists of CelebA and Anime. The model needs to be modified for learning cross-domain data: concatenate the domain embedding and the style embedding, take the result as the style embedding in the original model, and optimize the domain embedding during latent optimization. The results are shown in Figure 17. The learned poses are well aligned in both the animation and reality domains.
H PROOF
Our optimization target is to minimize the KL divergence between P and Q,
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathrm{KL}\big(P_i(x \mid c = c_i) \,\|\, Q_{\theta, s_i}(x \mid c = c_i)\big). \tag{9}$$
Expanding the above KL term, we have,
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \int_x P_i(x \mid c = c_i) \log \frac{P_i(x \mid c = c_i)}{Q_{\theta, s_i}(x \mid c = c_i)} \, dx. \tag{10}$$
The above integral cannot be calculated directly, but it can be estimated from the sampled images $\{I_j\} \sim P_i$,
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \sum_{I_j \sim P_i} P_i(x = I_j \mid c = c_i) \log \frac{P_i(x = I_j \mid c = c_i)}{Q_{\theta, s_i}(x = I_j \mid c = c_i)}. \tag{11}$$
Separating P and Q in the above expression by expanding the logarithm of the quotient, we have
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \sum_{I_j \sim P_i} \Big[ P_i(x = I_j \mid c = c_i) \log P_i(x = I_j \mid c = c_i) - P_i(x = I_j \mid c = c_i) \log Q_{\theta, s_i}(x = I_j \mid c = c_i) \Big]. \tag{12}$$
Since $P_i(x = I_j \mid c = c_i)$ is the dataset distribution, which is unknown but fixed, the first term is a constant, and the optimization target is equivalent to
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \sum_{I_j \sim P_i} P_i(x = I_j \mid c = c_i) \log Q_{\theta, s_i}(x = I_j \mid c = c_i). \tag{13}$$
Rewriting it into mathematical expectation form, we have
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \mathbb{E}_{I_j \sim P_i} \log Q_{\theta, s_i}(x = I_j \mid c = c_i), \tag{14}$$
where $P_i$ refers to $P_i(x = I_j \mid c = c_i)$. Our optimization target is thus equivalent to maximum likelihood estimation. Here we assume Q is a Gaussian distribution,
$$Q_{\theta, s_i}(x \mid c = c_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{1}{2\sigma^2} \left\| x - G_\theta(s_i, c_i) \right\|_2^2 \right). \tag{15}$$
Combining Eq. 14 and Eq. 15, we have
$$\max_{\theta, c_i, s_i} \sum_{i=1}^{N} \left( -\frac{1}{2\sigma^2} \left\| I_i - G_\theta(s_i, c_i) \right\|_2^2 \right). \tag{16}$$
Consequently, the final optimization target is
$$\min_{\theta, c_i, s_i} \sum_{i=1}^{N} \left\| I_i - G_\theta(z_i) \right\|_2^2. \tag{17}$$
Q.E.D. | 1. What is the focus of the paper in terms of image processing?
2. What are the strengths of the proposed approach, particularly in its architecture and motivation?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works?
4. Do you have any questions or concerns about the experimental evaluation?
5. Are there any recent works that the reviewer believes should be compared with the proposed method? | Review | Review
The paper addresses the problem of unsupervised content-style disentanglement. To this end, a content vector and a style vector are sampled from a given prior distribution. The style vector is decomposed into a mean and std, which are applied to the content code in the same manner as in AdaIN (the C-S Fusion block).
Pros:
The paper is well written and clear, providing a clear formulation of the problem and the proposed architecture.
The method provided is well motivated and clear.
The experiments demonstrate some improvement on state of the art. Some ablation is performed on the use of normalization layer, showing its effect on disentanglement. A variety of datasets are considered, and both generation quality (LPIPS) and disentanglement (classification accuracy) are numerically evaluated. Overall the experimental evaluation is extensive and show some improvement over a number of baselines as well as a new application in 3D generation.
Cons:
To me the C-S fusion block is essentially the application of AdaIn from the style vector to the content vector (hence applying a change of style). Modeling the content vector as a shared parameter c and the style vector as an image-specific has been introduced in MUNIT before (and in other works). Other than architectural specific choices, the difference to MUNIT is the application of this framework in conjunction with GLO instead of using and style encoder and a content encoder. Even though MUNIT was designed to work with class level supervision, the method suggested seems very similar (other then the use of GLO instead of encoders) and it might be the case that MUNIT would therefore perform similarly. I therefore believe a comparison to MUNIT when both A and B are the same dataset (e.g celebA) would provide a better insight.
Image2StyleGAN [1] and StyleGANv2 [2] are also able to disentangle content and style. It would be interesting to compare their method to the one suggested. The very latest work tackling this problem is that of Swapping Autoencoders [3] but this has only been recently published.
Some concerns regarding the experiments: In Table 2, the results of disentanglement are worse than those of FactorVAE. Why is this the case? In addition, no comparison to baselines is provided for Figure 7 (visual analogies on the Market-1501 dataset).
[1] Image2stylegan: How to embed images into the stylegan latent space? In: IEEE International Conference on Computer Vision (ICCV) (2019)
[2] Karras et al. Analyzing and Improving the Image Quality of StyleGAN. CVPR, 2020.
[3] Swapping Autoencoder for Deep Image Manipulation. NeurIPS 2020. |
ICLR | Title
Learning by shaking: Computing policy gradients by physical forward-propagation
Abstract
Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where, instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them.
Figure 1: Physical finger platform in action with different policies (panels a–d).
1 INTRODUCTION
Traditional reinforcement learning crucially relies on reward (Sutton & Barto, 2018). However, reward binds the agent to a certain task for which the reward represents success. Aligned with the recent surge of interest in unsupervised methods in reinforcement learning (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016) and previously proposed ideas (Schmidhuber, 1991a; 2010), we argue that there exist properties of a dynamical system which are not tied to any particular task, yet are highly useful, and knowledge of them can help solve other tasks more efficiently. This work focuses on the sensitivity of the produced trajectories of the system with respect to the policy, the so-called Physical Derivatives. The term physical comes from the fact that it uses the physics of the system rather than any idealized model. We learn a map from the directions in which policy parameters change to the directions in which every state of the trajectory changes. In general, our algorithm learns the Jacobian matrix of the system at every time step through the trajectory. The training phase consists of physically calculating directional derivatives by finite differences after applying perturbed versions of a nominal policy (a.k.a. controller). Perturbing the parameters of the controller is the reason for naming our method shaking. The test phase uses these directional derivatives to compute derivatives along unseen directions. Due to the difficulty of computing the Jacobian matrix by finite differences in higher dimensions, we use random controllers jointly with probabilistic learning methods to obtain a robust estimate of the Jacobian matrix at each instant of time along a trajectory. We are capable of this generalization to unseen perturbations because the trajectories of physical systems live on an intrinsic low-dimensional manifold and change slowly with small changes in the parameters of the system (Koopman, 1931). This assumption holds as long as the system is not chaotic or close to a bifurcation condition (Khalil, 2002).
1.1 PRELIMINARIES
A reward function describes how close the agent is to the solution of the target task. In the absence of the reward, the agent has no means to find its way towards the solution. Let $x \in X \subseteq \mathbb{R}^d$ be a d-dimensional state vector that fully describes the environment with which the agent interacts. At each state, the agent is allowed to take an action $u \in U \subseteq \mathbb{R}^q$ from a q-dimensional action space via a parameterised policy function $u = \pi(x; \theta)$. The agent is rewarded $r(x, u)$ by the function $r : X \times U \to \mathbb{R}$ when it takes action u at state x. The goal of learning is to update θ such that some desired target is achieved. The target can be anything as long as a concrete reward function is associated with it. In stochastic cases, the return $R : \Pi(\Theta) \to \mathbb{R}$ is defined as a cumulative future discounted reward whose expectation is often of main interest. For parametric policies, the space of feasible parameters Θ has a one-to-one correspondence to the policy space Π. The agent who takes on the policy π from state $x_0$ produces a trajectory $T \in \mathbb{T}$, where $\mathbb{T}$ is the space of possible trajectories. For a return function $R : \mathbb{T} \to \mathbb{R}$, the expected return becomes a function of the policy as $J(\pi_\theta) = \mathbb{E}_T\{R(T)\}$, where the expectation is taken with respect to the probability distribution $P(T \mid \pi_\theta)$. There exist two major classes of approaches in reinforcement learning: value-based methods and value-free methods. In the first class, a surrogate function is defined to approximate the value of either a state $V(x)$ or a state-action pair $Q(x, u)$. The policy is updated such that the agent tends towards states with higher values. The value-free methods update the policy directly without any need for an auxiliary function such as V or Q. This paper mainly concerns the second class. The policy parameters are updated as
$$\theta_{t+1} = \theta_t + \alpha \left.\frac{\partial J(\pi_\theta)}{\partial \theta}\right|_{\theta = \theta_t} \tag{1}$$
and the gradient ∂J(πθ)/∂θ is written as
$$\frac{\partial J(\pi_\theta)}{\partial \theta} = \int_{\mathbb{T}} \frac{\partial p(T \mid \pi_\theta)}{\partial \theta} R(T)\, dT \tag{2}$$
which is normally difficult to compute in practice. As can be seen in Eq. (2), the integrand on the r.h.s. consists of two terms. The second term R(T) is the return, which is defined according to the target task. Hence, this term is task-dependent. The first term $\partial p(T \mid \pi_\theta)/\partial \theta$, though, shows how the trajectories change with respect to a change in the policy. Notice that there is no notion of reward or any task-dependent quantity in this term. For an empirical distribution $p_e(T \mid \pi) = \frac{1}{M} \sum_{i=1}^{M} \delta(T - T^{(i)})$, the dependence of the partial derivative of the distribution of T on the partial derivative of T can be explicitly derived as
$$\frac{\partial p_e(T \mid \pi_\theta)}{\partial \theta} = \frac{1}{M} \sum_{i=1}^{M} u_1(T - T^{(i)}) \frac{\partial T}{\partial \theta} \tag{3}$$
where $u_1$ is the unit doublet function (derivative of the Dirac delta function). This exemplary distribution makes it clear that the change in the distribution of trajectories relates to the change of the trajectories themselves. As an unsupervised object, $\partial T / \partial \theta$ is of main interest in this paper.
1.2 PHYSICAL DERIVATIVE
In this paper, we investigate the feasibility of learning a less explored unsupervised quantity, the so called Physical Derivative which is computed directly from the physical system. In abstract terms, we perturb the policy and learn the effect of its perturbation on the resulting trajectory. The difference from traditional RL whose algorithms are based on eq. (1) is the absence of a specified reward function. Instead, we generate samples from ∂p(T |πθ)/∂θ of eq. (2) that makes it possible to compute ∂J(πθ)/∂θ for an arbitrary return function R. If the exact model of the system is known, control theory has a full set of tools to intervene in the system with stability and performance guarantees. When the system is unknown, one could identify the system as a preliminary step followed by a normal control synthesis process from control theory (Ljung, 2001). Otherwise, the model and the policy can be learned together in a model-based RL (Sutton, 1996) or in some cases adaptive control (Sastry & Bodson, 2011). We argue that learning physical derivatives is a middle ground. It is not model-based in the sense that it does not assume knowing the exact model of the system. Rather, it knows how the trajectories of the system change as a result of perturbing the policy
parameters. This differential information of the system has applications in many downstream tasks. This work focuses on the concept and introduction of physical derivatives and direct applications would go significantly beyond the scope of this work. Few potential applications are discussed with more details in appendix C.
Our contributions— In summary, the key contributions of the current paper are as follows:
• A method to generate training pairs to learn the map from the policy perturbations to the resulting changes in the trajectories.
• Learning the above map as a probabilistic function and showing that it generalizes to unseen perturbations in the policy.
• Use the inverse of the above map to perturb the policy in the desired direction to achieve certain goals without conventional RL methods.
• Use a physical custom-built robotic platform to test the method and propose solutions to deal with the inherent issues of the physical system to ensure the practicality of the method (see fig. 1 for images of the platform and appendix A for technical details).
• The supplementary materials for the paper, including code and the videos of the robot in action can be found in https://sites.google.com/view/ physicalderivatives/
2 METHOD
In this section, we describe our pipeline to estimate the physical derivatives and our proposed solutions to the inevitable challenges that are likely to occur while working with a real physical robot. We are interested in $\partial T / \partial \theta$, which denotes how a small change in the parameters θ of the controller results in a different trajectory produced by the system. We normally consider a finite period of time [0, T], and the trajectory is an ordered list of states $T = [x_0, x_1, \ldots, x_T]$, where the subscript shows the time step. Therefore, having $\partial T / \partial \theta$ is equivalent to having $\partial x_t / \partial \theta$ for every $t \in \{1, \ldots, T\}$. Notice that the initial state $x_0$ is chosen by us. Hence, we can see it either as a constant or as a changeable parameter in θ. We kept it fixed in our experiments.
Assume $x_t \in \mathbb{R}^d$ and $\theta \in \mathbb{R}^m$. Hence, $\nabla_\theta x_t = \partial x_t / \partial \theta \in \mathbb{R}^{d \times m}$, where the $i$-th row of this matrix is $\nabla_\theta x_t^i = (\partial x_t^i / \partial \theta)^T \in \mathbb{R}^m$, showing how the $i$-th dimension of the state vector changes in response to a perturbation in θ. The directional derivative of $x_t^i$ in the direction δθ is defined as
$$\nabla_\theta^{\delta\theta} x_t^i = \left\langle \nabla_\theta x_t^i, \frac{\delta\theta}{|\delta\theta|} \right\rangle. \tag{4}$$
If (4) is available for m linearly independent and orthonormal directions $\{\delta\theta^{(1)}, \delta\theta^{(2)}, \ldots, \delta\theta^{(m)}\}$, the directional derivative along an arbitrary δθ can be approximated by
$$\nabla_\theta^{\delta\theta} x_t^i = \sum_{j=1}^{m} c_j \left\langle \nabla_\theta x_t^i, \delta\theta^{(j)} \right\rangle \tag{5}$$
where $c_j = \langle \delta\theta, \delta\theta^{(j)} \rangle$ are the coordinates of the desired direction in the coordinate system formed by the orthonormal bases.
In practice, the m directions $\delta\theta^{(j)}$ can be chosen randomly or along some pre-defined axes of the coordinate system. To compute $\langle \nabla_\theta x_t^i, \delta\theta^{(j)} \rangle$, the nominal policy parameters θ are perturbed by $\delta\theta^{(j)}$ as $\theta^{(j)} \leftarrow \theta + \delta\theta^{(j)}$, and the derivative is computed as
$$\left\langle \nabla_\theta x_t^i, \delta\theta^{(j)} \right\rangle = \lim_{h \to 0} \frac{x_t^i(\theta + h\,\delta\theta^{(j)}) - x_t^i(\theta)}{h}. \tag{6}$$
This quantity is often approximated by finite differences, where h takes a small nonzero value. By perturbing the parameters θ along m orthonormal directions $\delta\theta^{(j)}$ and computing the approximate directional derivative by (6), $\nabla_\theta^{\delta\theta} x_t^i$ can be computed along every arbitrary direction δθ, meaning that we can compute $\nabla_\theta x_t^i$ by evaluating it along any direction, which is the aim of this paper.
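The following NumPy sketch illustrates the procedure on a toy simulated system; the rollout function stands in for running the physical robot, and the toy policy and dynamics are illustrative only.

```python
import numpy as np

def rollout(theta, T=50):
    """Stand-in for running the physical system with policy parameters theta;
    returns a trajectory of shape (T, d)."""
    W = theta.reshape(2, 2)
    x = np.full(2, 0.1)
    traj = []
    for _ in range(T):
        u = np.tanh(W @ x)        # toy linear policy
        x = 0.9 * x + 0.1 * u     # toy stable dynamics
        traj.append(x.copy())
    return np.array(traj)

def directional_derivatives(theta, directions, h=1e-3):
    """Approximate <grad_theta x_t, dtheta^(j)> for every t and j via Eq. (6)."""
    base = rollout(theta)
    return np.stack([(rollout(theta + h * d) - base) / h for d in directions])

theta = np.random.randn(4)
dirs = np.eye(4)                   # m = 4 orthonormal probe directions
D = directional_derivatives(theta, dirs)
print(D.shape)                     # (m, T, d) = (4, 50, 2)
```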
Figure 2: Gaussian (left) and uniform (right) shaking examples.
In matrix form, for $x \in \mathbb{R}^d$ we can compute $\nabla^{\delta\theta^{(j)}}_{\theta} x = [\nabla^{\delta\theta^{(j)}}_{\theta} x^1, \nabla^{\delta\theta^{(j)}}_{\theta} x^2, \ldots, \nabla^{\delta\theta^{(j)}}_{\theta} x^d]^T$ in a single run by computing (6) for all $d$ dimensions of the state. Let us define

$$\Delta_\theta x \triangleq [\nabla^{\delta\theta^{(1)}}_{\theta} x,\ \nabla^{\delta\theta^{(2)}}_{\theta} x,\ \ldots,\ \nabla^{\delta\theta^{(m)}}_{\theta} x] \tag{7}$$

where $\Delta_\theta x \in \mathbb{R}^{d \times m}$, and let $\Lambda = [\delta\theta^{(1)}, \delta\theta^{(2)}, \ldots, \delta\theta^{(m)}]$. Then, if $\nabla^{\delta\theta}_{\theta} x$ denotes the directional derivative of $x$ along $\delta\theta$, we can write it as

$$\nabla^{\delta\theta}_{\theta} x = \Delta_\theta x\, (\Lambda^T \delta\theta) \tag{8}$$
which is just a vector representation of eq. (4). Even though the linear formula of eq. (8) requires only $m$ directional derivatives, it has two major downsides. First, it does not offer a clear way to incorporate more than $m$ training directional physical derivatives. Second, the linear approximation remains valid only for very small $\delta\theta$. We therefore propose a Gaussian process (GP) as a nonlinear probabilistic function approximator (Rasmussen, 2003) to capture the maps $\hat{g}_t$ defined as
$$\hat{g}_t : \Theta \to X \tag{9}$$
$$\hat{g}_t(\delta\theta) = \delta x_t \tag{10}$$
where the subscript $t$ indicates that the function maps $\delta\theta$ to the change $\delta x_t$ of the states at time step $t$. We consider a distinct function for every time step; taking into account the commonality among the function approximators at different time steps is deferred to future research. Learning this map requires training data, which comes from an initial data collection phase called shaking. Shaking refers to perturbing the parameters of the controller to obtain the set of trajectories produced by the perturbed controllers.
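A minimal sketch of how the per-time-step maps ĝt of eqs. (9)-(10) could be fit with off-the-shelf Gaussian process regression is given below. The paper does not specify the kernel, so the RBF-plus-white-noise kernel and all hyper-parameters here are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_physical_derivative_maps(delta_thetas, delta_trajs):
    """delta_thetas: (N, m) perturbations; delta_trajs: (N, T, d) state changes."""
    N, T, d = delta_trajs.shape
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)  # assumed kernel
    models = []
    for t in range(T):          # a distinct GP per time step, as in the text
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(delta_thetas, delta_trajs[:, t, :])   # multi-output regression
        models.append(gp)
    return models

def predict_delta_x(models, delta_theta):
    """State change at every time step for an unseen perturbation delta_theta."""
    return np.stack([gp.predict(delta_theta[None, :])[0] for gp in models])  # (T, d)
```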
The perturbation can be either regular or stochastic. Stochastic perturbations have the advantage over regular ones that the agent does not need to worry about perturbing the parameters in any particular direction. Besides, in some cases, perturbing the parameters of the policy in certain directions is infeasible. We propose two methods of shaking, called Gaussian and uniform shaking.
Gaussian shaking— Likely values of θ define nominal policies encoded by {θ(1), θ(2), . . . , θ(m)}. We place a Gaussian distribution centered at each of the nominal values, resulting in a mixture of Gaussians. To reduce the number of hyper-parameters, we assume the variances of the Gaussians are themselves sampled from an exponential distribution, which ensures they all take positive values (see fig. 2, left). Here, we manually choose a reasonable value for the rate parameter of the exponential distribution. Doing inference on the hyper-parameters of the sampling distributions is a topic for future research, especially in active learning for a cleverer, less costly sampling strategy.
Uniform shaking— In this setting, the space of the changeable parameters of the policy is discretized, and a uniform distribution is assumed around each value of this grid, with some overlap with the neighboring cells (see fig. 2, right).
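The two shaking schemes can be sketched as follows; the nominal parameter vectors, the exponential rate `lam`, and the overlap factor are illustrative assumptions rather than the values used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_shaking(nominal_thetas, lam=10.0, n_samples=100):
    """Mixture of Gaussians centered at the nominal policies; each component's
    variance is itself drawn from an exponential distribution with rate lam."""
    samples = []
    for _ in range(n_samples):
        center = nominal_thetas[rng.integers(len(nominal_thetas))]
        var = rng.exponential(1.0 / lam)                 # positive by construction
        samples.append(rng.normal(center, np.sqrt(var)))
    return np.array(samples)

def uniform_shaking(grid, cell_width, overlap=0.2, n_samples=100):
    """Uniform sampling around each grid point, overlapping neighboring cells."""
    half = 0.5 * cell_width * (1.0 + overlap)            # half-width per cell
    centers = grid[rng.integers(len(grid), size=n_samples)]
    return centers + rng.uniform(-half, half, size=centers.shape)
```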
We show the effect of each of these sampling methods in section 4. We observed that the results are less sensitive to the hyper-parameters of uniform sampling than to those of Gaussian sampling. A carelessly chosen rate for the exponential distribution that generates the variances of the Gaussians can result in overly local or overly global sampling, which in turn gives rise to a large variance or bias in the estimated gradients.
3 REAL WORLD CHALLENGES
In this section, we present two major low-level challenges that are common when dealing with physical systems. There is inherent noise and imperfection in the system that results in changes in the produced trajectories even while the policy parameters are kept fixed. In our finger platform, we observed two major sources of noise that are likely to occur in other physical systems too. We call them temporal and spatial noise, for the reasons described below.
Temporal noise. Temporal noise, represented by $n$, affects trajectories by shifting them in time:

$$x_t \leftarrow x_{t+n} \quad \text{for } t = 0, 1, \ldots, T. \tag{11}$$

Notice that the absence of a subscript $t$ in $n$ shows that this noise is not time-dependent, i.e., the time shift does not change along the trajectory as time proceeds.
Spatial noise. Trajectories affected by spatial noise cannot be aligned with each other by shifting forward or backward in time. We model this noise as a state-dependent influence on the state of the system at every time step:

$$x_t \leftarrow x_t + n_{x_t} \tag{12}$$
The following definition makes the distinction more concrete.

Definition 1. Consider two trajectories $\mathcal{T}^{(1)}(t)$ and $\mathcal{T}^{(2)}(t)$ as two temporal signals. Assume $S_{t_\circ}$ is the shift-in-time operator defined as

$$S_{t_\circ}\mathcal{T}(t) = \mathcal{T}(t + t_\circ) \tag{13}$$

for an arbitrary function of time $\mathcal{T}(t)$. We say $\mathcal{T}^{(2)}(t)$ is a temporally noisy version of $\mathcal{T}^{(1)}(t)$ if

$$\exists\, t_\circ \in \mathbb{R} \ \text{s.t.} \ \|\mathcal{T}^{(2)} - S_{t_\circ}\mathcal{T}^{(1)}\|_1 \le \epsilon \tag{14}$$

where $\epsilon$ is a hyper-parameter threshold that reflects our prior confidence about the accuracy of the motors, joints, and physical and electrical elements (in general, the construction process) of the robot. On the other hand, $\mathcal{T}^{(2)}$ is called a spatially noisy version of $\mathcal{T}^{(1)}$ if

$$\nexists\, t_\circ \in \mathbb{R} \ \text{s.t.} \ \|\mathcal{T}^{(2)} - S_{t_\circ}\mathcal{T}^{(1)}\|_1 \le \epsilon \tag{15}$$
3.0.1 SOLUTION TO TEMPORAL NOISE
Fortunately, this type of noise is not state-dependent by definition. If we find out by how much a trajectory is shifted in time with respect to another trajectory, we can simply shift it by that many time steps and compensate for the delay. Hence, the problem becomes detecting lagged trajectories with respect to a reference trajectory and estimating the required time shift to compensate for the delay. We can either use physical landmarks in the trajectories to align them or use the correlation between them as a measure of alignment. The latter gave better results; hence, we postpone the description of the former to appendix D.1.
Correlation-based delay estimation. In this method, we use the correlation between zero-meaned trajectories $\mathcal{T}^{(i)}$ and $\mathcal{T}^{(j)}$ to check whether one is a lagged version of the other. The delay $\tau$ is found by

$$\tau^* = \arg\max_{\tau} \sum_{t=0}^{T-\tau} \big\langle S_\tau x^{(i)}_t,\ x^{(j)}_t \big\rangle \tag{16}$$
where $S_\tau$ is a shift operator by $\tau \in \mathbb{Z}$ time steps. In practice, we take one trajectory of $\{\mathcal{T}^{(1)}, \mathcal{T}^{(2)}, \ldots, \mathcal{T}^{(M)}\}$, e.g., $\mathcal{T}^{(r)}$, as the reference and synchronize the other trajectories with respect to it using eq. (16). The trajectories must first be normalized to avoid trivial solutions in which every trajectory is pushed towards the larger parts of the reference trajectory. For illustrative purposes, fig. 14 shows a sample lagged trajectory from the finger platform and its correction by the above method.
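A minimal sketch of this correlation-based synchronization (eq. (16)) might look as follows; the search window `max_shift` is an assumed hyper-parameter, and samples affected by the wrap-around of the shift are simply cropped.

```python
import numpy as np

def estimate_delay(traj, ref, max_shift=100):
    """traj, ref: (T, d) trajectories; returns the shift maximizing eq. (16)."""
    traj = (traj - traj.mean(0)) / (traj.std(0) + 1e-8)  # zero-mean, normalize
    ref = (ref - ref.mean(0)) / (ref.std(0) + 1e-8)
    valid = slice(max_shift, len(ref) - max_shift)       # crop wrap-around edges
    best_tau, best_corr = 0, -np.inf
    for tau in range(-max_shift, max_shift + 1):
        shifted = np.roll(traj, -tau, axis=0)            # shift operator S_tau
        corr = np.sum(shifted[valid] * ref[valid])       # summed inner products
        if corr > best_corr:
            best_tau, best_corr = tau, corr
    return best_tau

def synchronize(traj, ref, max_shift=100):
    """Undo the estimated lag of traj with respect to the reference."""
    return np.roll(traj, -estimate_delay(traj, ref, max_shift), axis=0)
```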
3.1 SOLUTION TO SPATIAL NOISE
The spatial noise can be a stochastic function of the actuators, environmental changes, and the electronic drivers. Under a perfect model of the transition dynamics $x_{t+1} = f(x_t, u_t)$, applying the same control sequence $\{u_0, u_1, \ldots, u_{T-1}\}$ always results in the same sequence of states $\{x_1, x_2, \ldots, x_T\}$ when starting from the same initial state $x_0$. This assumption is often violated in physical systems, as different runs of the same policy may result in different trajectories, as can be seen in fig. 10 in the appendix. The noise in the dynamics can be any function of states, inputs, and time; it is therefore difficult to model, since doing so would require a prohibitively large number of random experiments. The good news is that if the physical system is built properly, the effect of this noise is expectedly low. Based on our observations from the finger platform, we make the following assumption.
Assumption 2. Limit on the physical noise: Let the control sequence $U = \{u_0, u_1, \ldots, u_{T-1}\}$ be applied to the system $M$ times, resulting in the sequences of states $\mathcal{T}^{(1)}, \mathcal{T}^{(2)}, \ldots, \mathcal{T}^{(M)}$. There exists a relatively small $\zeta$ such that

$$\|\mathcal{T}^{(i)} - \mathcal{T}^{(j)}\|_\infty \le \zeta \quad \text{for every } i, j \in \{1, 2, \ldots, M\}. \tag{17}$$
The word relatively here means that the change in the trajectory due to the inherent physical noise of the system must be small compared to the change in the trajectories when the parameters of the policy are perturbed.
To reduce the sensitivity of the estimated gradient to this unwanted spatial noise, we divide the state space of the physical system into regularly located adjacent cells called voxels. Each voxel vox(c) is represented by its center $c$ and is defined as

$$\text{vox}(c) = \{x \in X \mid \|x - c\|_\infty \le \gamma\} \tag{18}$$

where $\gamma$ is the parameter of the voxelization. The concept of a voxel is used roughly as a super-state: every state that ends up within vox(c) gives rise to the same super-state. After recording the trajectories from the robot, every state is mapped to the center of the voxel it belongs to as

$$c \leftarrow x \quad \text{for } x \in \text{vox}(c). \tag{19}$$

After voxelization, we work with $c$ instead of $x$. For example, all the gradients of (7) are computed as $\nabla_\theta c$ rather than $\nabla_\theta x$. To illustrate the positive effect of voxelization of the state space, fig. 3 shows that increasing the voxel size improves the overlap between two trajectories that deviate from each other due to the inherent spatial noise of the system, that is, not because the parameters of the policy are perturbed, but because of the inherent imperfection of the mechanical and electrical components of the system. This benefit comes at the cost of the error introduced by voxelization. Fortunately, this error is bounded, due to the following lemma.

Lemma 3. The error caused by voxelization is bounded and inversely proportional to the size of each voxel (see appendix F.1 for a brief proof).
After dealing with the challenge of inherent noise, we pursue the main goal of this paper, which is estimating ∂T /∂θ directly from the physical system. In the following, we investigate the use of different types of controllers to emphasize the range of applicability of the proposed method.
4 EXPERIMENTS
In this section, we show how physical derivatives can be estimated in practice through several experiments. Notice that our work is different from computing gradients around the working point of a system by finite differences: we aim to collect samples of such gradients by perturbing a grid of nominal values of the policy parameters and then generalize to unseen perturbations with a Gaussian process as a probabilistic regression method. The experiments are designed to show each challenge separately and the efficacy of our proposed solution to it. Due to space constraints, details of the physical platform can be found in section A of the appendix. See the project website1 for videos of the robot while collecting data for different experiments and for more supporting materials.
1https://sites.google.com/view/physicalderivatives/
4.1 LINEAR OPEN-LOOP CONTROLLER
As a simple yet general policy, in this section we consider an open-loop controller that is a linear function of time. The policy $u_t = [u_{1t}, u_{2t}, u_{3t}]$ constitutes the torques applied to the three motors $\{m_1, m_2, m_3\}$ of the system and is assigned as

$$u_{it} = w_i t + b_i \quad \text{for } i = 1, 2, 3 \tag{20}$$

Notice that the torque consists of two terms: the first term $w_i t$ grows with time, while the second term remains constant. The controller has 6 parameters in total, denoted by $\theta$. The task is to predict $\nabla_\theta x_t$ for every $t$ along the trajectory. In the training phase, the training data is obtained via shaking, as described in section 2.
Fig. 7 shows examples of nominal trajectories together with the trajectories produced by the perturbed controllers and the computed derivatives. The arrows are plotted as originating from the perturbed trajectories only for easier visual distinction. Each arrow corresponds to the change of the states at a certain time step on the source trajectory as a result of perturbing the policy. Each figure corresponds to a pair of nominal values {w, b} of the linear open-loop controller. See fig. 29 for more examples.
4.2 NONLINEAR OPEN-LOOP CONTROLLER
Physical derivatives can naturally be computed for either linear or nonlinear controllers, which distinguishes our approach from taking the gradient of models through time. In model-based methods, if the model's transition dynamics is not differentiable, taking the derivative is theoretically challenging. However, our method takes advantage of the real physics of the system to compute the gradients, regardless of whether an approximating model would be differentiable or not. To elaborate on this, we test our method with a simple but nonlinear policy, i.e., $u_t = A \sin(\omega t)$. The sinusoidal torque is applied to either one or two motors of the system to investigate the performance of our method. We tested Gaussian and uniform shaking for $\theta = \{A, \omega\}$ as the parameters of this controller. The GP interpolation of the partial derivatives at some time instants along the trajectory can be seen in fig. 6 and, more extensively, in figs. 16 to 18 in the appendix. One might be interested in the direction of the predicted derivative instead of its exact magnitude. To this end, we take several test perturbations for every time step and use cos(α) as a measure of alignment between the predicted and ground-truth derivative vectors. The time evolution of the histogram of this measure along the trajectory shows better alignment as time proceeds. This effect can be seen in figs. 27 and 28 and confirms our observation of an initial transient noise in the system that gradually dies out as time progresses. The overall performance of our method in predicting physical derivatives along unseen directions for the two shaking methods is shown in appendix E.
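The alignment measure used above can be sketched as the per-time-step cosine between predicted and ground-truth derivative vectors:

```python
import numpy as np

def alignment(pred, true, eps=1e-8):
    """pred, true: (T, d) derivative vectors; returns cos(alpha) per time step."""
    num = np.sum(pred * true, axis=1)
    den = np.linalg.norm(pred, axis=1) * np.linalg.norm(true, axis=1) + eps
    return num / den              # 1 = aligned, -1 = opposite directions
```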
4.3 FEEDBACK CONTROLLER
Often in practice, the policy incorporates some function of the states of the system. Well-known examples that have been used extensively in control applications are P, PD, PI and PID controllers. Here, we consider two members of this family, i.e., P and PD controllers. The policy becomes $u = K_p e$ for P controllers and $u = K_p e + K_d \dot{e}$ for PD controllers. The error $e$ is the difference between the current state $x$ and the desired state $x^*$. The parameters of the controller $\{K_p, K_d\}$ are scalar values multiplied element-wise by the error vector. This implies that the controller parameters are shared across the three motors, leaving the controller of the whole platform with two parameters that weight the value and the rate of the error. We applied uniform and Gaussian shaking to the set of parameters $\theta = \{K_p, K_d\}$ under different scenarios. The GP interpolation of the physical derivatives at some time instants along the trajectory can be seen in fig. 6 and, more extensively, in figs. 19 to 24 in the appendix. The time evolution of the histogram of misalignment between predicted and ground-truth directional derivatives (see figs. 25 and 28 in the appendix) once again confirms the existence of the initial transient noise, as also observed in section 4.2. As in the sinusoidal experiment, the overall performance of our method is presented in appendix E.
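A minimal sketch of the P/PD policy described above, with the scalar gains shared across the three motors; the sign convention follows the text's definition e = x − x∗, and ė = ẋ since x∗ is constant.

```python
import numpy as np

def pd_policy(x, x_dot, x_star, kp, kd=0.0):
    """u = kp * e + kd * e_dot with e = x - x_star; kd = 0 gives the P controller."""
    return kp * (x - x_star) + kd * x_dot
```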
4.4 ZERO-SHOT PLANNING TASK
Our previous experiments in sections 4.1, 4.2 and 4.3 showed that learning the physical-derivative map is feasible for various types of controllers. In this section, we demonstrate an example of a constraint-satisfaction task by means of the physical-derivative map. In this experiment, the superscript (s) corresponds to the nominal trajectory, called the source. Assume the system is controlled by a PD controller to reach a target state $x^*$, i.e., the control torques are designed as $u = k_p^{(s)}(x - x^*) + k_d^{(s)}\dot{x}$. The controller does a decent job of reaching the target state given reasonable values of $k_p$ and $k_d$. However, such a controller does not give us a clear way to shape the trajectory that starts from $x_\circ$ and ends at $x^*$. Assume it is desired that the nominal controlled trajectory $\mathcal{T}^{(s)}$ pass through an intermediate state $x^*_t$ at time $t$ on its way towards the target state $x^*$ (we could equally require that the system avoid some regions of the state space for safety reasons). The solution with physical derivatives is as follows. Assume $k_d^{(s)}$ is fixed and only $k_p^{(s)}$ is changeable. If the physical-derivative map is available, we have access to $\hat{g}_t(k_p^* - k_p^{(s)}) = (x^*_t - x^{(s)}_t)/(k_p^* - k_p^{(s)})$. By a simple algebraic rearrangement, we have

$$k_p^* = \frac{x^*_t - x^{(s)}_t}{\hat{g}_t(k_p^* - k_p^{(s)})} + k_p^{(s)}. \tag{21}$$

The new parameter of the policy is supposed to push the source trajectory $\mathcal{T}^{(s)}$ towards a target trajectory $\mathcal{T}^*$ that passes through the desired state $x^*_t$ at time $t$. The result of this experiment on our physical finger platform can be seen in fig. 8.
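Since $k_p^*$ appears on both sides of eq. (21), one way to solve it is by fixed-point iteration, sketched below for a single state coordinate. Whether the authors solved eq. (21) this way is not stated, so treat this as an illustrative assumption; `g_hat_t` stands for the learned sensitivity map at time t.

```python
def solve_kp(g_hat_t, x_target_t, x_source_t, kp_source, n_iters=20):
    """Fixed-point iteration on eq. (21) for a scalar state coordinate."""
    kp = kp_source + 1e-2                      # small initial perturbation
    for _ in range(n_iters):
        slope = g_hat_t(kp - kp_source)        # learned local sensitivity
        kp = (x_target_t - x_source_t) / slope + kp_source   # eq. (21)
    return kp
```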
4.5 RELATED WORKS
A truly intelligent agent must develop some sort of general competence that allows it to combine primitive skills to master a range of tasks, not only a single task associated with a specified reward function. The major part of such competence comes from unsupervised experience. Animals use a similar competence to quickly adapt to new environments (Weng et al., 2001) and function efficiently soon after birth, before being exposed to massive supervised experience (Zador, 2019). Due to its generality, such basic skills can be inherited over generations rather than being learned from scratch (Gaier & Ha, 2019). Unlike traditional RL, in which learning is driven by an extrinsic reward signal, intrinsically motivated RL concerns task-agnostic learning. Similar to animal infants (Touwen et al., 1992), the agent may undergo a developmental period in which it acquires reusable modular skills (Kaplan & Oudeyer, 2003; Weng et al., 2001) such as curiosity and confidence (Schmidhuber, 1991a; Kompella et al., 2017). Another aspect of such general competence is the ability of the agent to remain safe during its learning and deployment period (Garcıa & Fernández, 2015). In physical systems, especially in continuous control, stability is a major aspect of safety, implying that the states of the system converge to some invariant sets or remain within certain bounds (Lyapunov, 1992). Control theory often assumes the model of the system is known in order to guarantee stability (Khalil, 2002). In the absence of the model, model-based RL learns the model along with the policy. Hence, learning the transition model to predict future states can be another intrinsic reward.
From a technical point of view, our work is related to sensitivity analysis and how it is used to train the parameters of models, as in NeuralODE (Chen et al.). The method has proven effective in many tasks, including learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as model-free sensitivity analysis on real-world systems. In NeuralODE, the gradient with respect to the parameters requires solving ODEs for both the states and the adjoint states, which requires a transition model. Since we work directly on the physical system, we do not need to compute the integrals forward in time: the system itself acts as a physical ODE solver. We refer to appendix F for a more detailed review of the related works.
5 CONCLUSION
In this paper, we presented a method to learn how the trajectories of a physical, real-world dynamical system change with respect to a change in the policy parameters. We tested our method on a custom-built platform called the finger robot, which allows testing various controllers with different settings, to show the applicability of our method for linear, nonlinear, open-loop, and feedback controllers. By estimating the physical-derivative function, we showed that our method is able to push a controlled trajectory towards a target intermediate state. We investigated the real-world challenges of performing a fine, sensitive task such as estimating physical derivatives on a real robot, and we proposed solutions to make our algorithm robust to the inherent imperfection and noise of physical systems. We focused mainly on the low-level issues of physical derivatives and on showing the feasibility of estimating them robustly. We expect that physical derivatives will contribute to research areas such as safety, control with constraint satisfaction, trajectory planning, and robust control.
A PHYSICAL PLATFORM
In this section, we introduce the physical robot on which we tested our method. The robot is called the finger platform, or simply the finger, throughout this paper. The ranges of movement for the motors are [0, π], [0, π], and [0, 2π], respectively; the axes of the plots throughout the paper are in radians. The platform consists of three articulated arms with three degrees of freedom in total (see fig. 9d). The motors {m1, m2, m3} are depicted in the figure; this naming remains consistent throughout the paper. Each arm is moved by a separate brushless DC motor and has one degree of freedom to swing in its own plane (see fig. 9a). Each arm is equipped with an encoder that measures its angle (see fig. 9b). The brushless motors are controlled by an electronic driver that receives the torque values for each motor from a computer terminal via a CAN bus and applies them to the motors (see fig. 9c). Due to the imperfections of the arms, motors, and drivers, we did not use any model of the system, including the inertia matrix of the robot or the current-torque characteristic of the motors. The low-cost and safe nature of this robot makes it a suitable platform for testing the idea of physical derivatives, which requires applying many different controllers in the training phase.
B ADDITIONAL PLOTS ILLUSTRATING REAL WORLD CHALLENGES (SECTION 3)
C APPLICATIONS OF PHYSICAL DERIVATIVES
If we know how the states of a trajectory change as a result of a change in the policy parameters, the policy can easily be updated to push the trajectory towards a desired one. For example, assume we are interested in going from the current trajectory T (θ) to a target trajectory T ∗. The distance between these trajectories can be minimized by perturbing the policy parameters in the direction −∂‖T (θ) − T ∗‖/∂θ. This direction is readily available, since we have estimated ∂T (θ)/∂θ as a physical derivative. As an exemplary case, we show this application of our method in practice in section 4. Other applications of physical derivatives arise in robust control and safety. In both cases, the physical derivative allows us to predict the behaviour of the system if the policy changes in a neighbourhood around a nominal policy. It is then possible to make sure that some performance or safety criteria will not be violated by a local perturbation in the policy. As a concrete example, an autonomous driving system could have a calibration phase during which the physical derivatives of the car are estimated by perturbing the controller parameters around different nominal policies that are likely to occur on real roads. The calibration must be done in a safe condition before deploying the system. When deployed, the estimated physical derivatives can be used to predict the effect of a change of the policy on the behaviour of the system and to neutralize the change if it would move the car towards unsafe regions of its state space. The command that changes the policy can be issued by a high-level controller (e.g., a guidance system), and safety is confirmed by a low-level mechanism through physical derivatives. This work focuses on the concept and introduction of physical derivatives; direct applications go significantly beyond its scope. In the following, we give a more detailed description of the use of physical derivatives in robust and safe control.
Robust control. In control theory, robust control relates to the design of a controller whose performance is guaranteed for a range of systems and controllers belonging to a certain neighborhood around the nominal system (Zhou & Doyle, 1998). It is desirable to have a controller that keeps the performance of the system at a certain level even if the parameters of the controller are not fixed to their theoretical values. Assume the performance of the system is associated with some function E(T ) of a trajectory. Changing the parameters θ of the controller results in a change in the trajectories. This allows us to compute ∂T /∂θ, which consequently gives us ∂E(T )/∂θ by the chain rule. Roughly speaking, between two sets of parameters θ1 and θ2, the set that gives the smaller ∂E/∂θ is preferred. This means that by shaking the parameters of the controller and assessing the performance of the system, an estimate of the curvature of the landscape of E(T (θ)) is obtained. We prefer flatter regions of this space, where a small change in θ does not cause a drastic change in the performance metric E.
Safety. Safety refers to situations in which the agent may hurt itself or the environment and cause irreversible damage if it freely takes arbitrary actions (Garcıa & Fernández, 2015). For a safety-critical system whose full physical model is hard to obtain, physical gradients can assist in restricting the parameters of the robot to avoid unsafe behavior. The physical derivatives are learned in the lab environment before the robot is deployed into the wild. For example, a rover whose mission is to safely explore an unknown environment often enjoys a learning loop that allows it to adapt to the new environment. Even though learning in the new environment requires sufficient exploration, the physical derivatives can be used to roughly simulate the robot's next few states under a given update to its parameters. Potentially harmful updates can be detected by such simulation and avoided.
D EXTENDED SET OF SOLUTIONS TO THE REAL WORLD CHALLENGES
D.1 DETECTING ZERO CROSSING
In this method, we take advantage of special landmarks in the trajectories. The landmarks are typically caused by physical constraints of the system. For example, when a robot's leg touches the ground, the velocity of the leg becomes zero. Likewise, when a joint reaches its physical limit, the velocity of the arm connected to the joint becomes zero or changes sign. In both cases, a zero crossing occurs that can be used as a landmark to synchronize lagged trajectories with a reference trajectory. Even though this method eliminates the temporal noise, it requires the presence of such landmarks along the trajectories. Notice that from a mathematical point of view, there is nothing special about zero: we can pick any value of the states along a reference trajectory and synchronize all other trajectories with respect to it. In practice, however, physical landmarks are easier to detect and have less ambiguity, which consequently gives a more accurate synchronization.
E EXPERIMENTAL DETAILS
The starting position in all the experiments is (π/2, π/2, π). The overall details of the tasks are as follows:

Task               | Number of trajectories | Timesteps
-------------------|------------------------|----------
Linear (N)         | 640                    | 1500
PD controller (N)  | 640                    | 1500
PD controller (U)  | 1000                   | 1500
Sine 1 joint (N)   | 640                    | 5000
Sine 1 joint (U)   | 1000                   | 5000
Sine 2 joints (U)  | 640                    | 5000
Sine 2 joints (N)  | 1000                   | 5000

Here (N) denotes Gaussian (normal) shaking and (U) denotes uniform shaking.
In the normal (Gaussian) sampling cases, we ran 10 simulations for each setting of the λ parameters, which indicate the noise level.
E.1 LINEAR
$$u_{it} = w_i t + b_i \quad \text{for } i = 1, 2, 3 \tag{22}$$
E.1.1 GAUSSIAN SAMPLING
$$w_i = W_i + \epsilon_{w,i} \quad \text{for } i = 1, 2, 3$$
$$\epsilon_{w,i} \sim N(0,\ e_w \times \|W_i\|_2), \qquad e_w \sim \exp(\lambda_w) \quad \text{for } \lambda_w = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$b_i = B_i + \epsilon_{b,i} \quad \text{for } i = 1, 2, 3$$
$$\epsilon_{b,i} \sim N(0,\ e_b \times \|B_i\|_2), \qquad e_b \sim \exp(\lambda_b) \quad \text{for } \lambda_b = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$W = [0.00001,\ 0.0001,\ -0.00001], \qquad B = [-0.28,\ -0.15,\ -0.08]$$
E.2 PD CONTROLLER
The final destination is $(\pi/10,\ 3\pi/4,\ 7\pi/12)$.
E.2.1 GAUSSIAN SAMPLING
$$k_p = K_P + \epsilon_{k_p}, \qquad \epsilon_{k_p} \sim N(0,\ e_{k_p} \times \|K_P\|), \qquad e_{k_p} \sim \exp(\lambda_{k_p}) \quad \text{for } \lambda_{k_p} = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$k_d = K_D + \epsilon_{k_d}, \qquad \epsilon_{k_d} \sim N(0,\ e_{k_d} \times \|K_D\|), \qquad e_{k_d} \sim \exp(\lambda_{k_d}) \quad \text{for } \lambda_{k_d} = 1, 5, 10, 50, 100, 500, 1000, 5000$$
E.2.2 UNIFORM SAMPLING
$$k_p \sim U(-0.5,\ 1.5), \quad K_P = 1, \qquad k_d = K_D = 0.01$$
E.3 SINE 1 JOINT
E.3.1 GAUSSIAN SAMPLING
$$w = W + \epsilon_w, \qquad \epsilon_w \sim N(0,\ e_w \times \|W\|), \qquad e_w \sim \exp(\lambda_w) \quad \text{for } \lambda_w = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$a = A + \epsilon_a, \qquad \epsilon_a \sim N(0,\ e_a \times \|A\|), \qquad e_a \sim \exp(\lambda_a) \quad \text{for } \lambda_a = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$W = 0.01, \qquad A = 0.5$$
E.3.2 UNIFORM SAMPLING
$$w \sim U(0.005,\ 0.015), \qquad a = A = 0.5$$
E.3.3 SINE 2 JOINTS
E.3.4 GAUSSIAN SAMPLING
$$w_i = W_i + \epsilon_{w,i} \quad \text{for } i = 1, 2, \qquad \epsilon_{w,i} \sim N(0,\ e_w \times \|W\|_2), \qquad e_w \sim \exp(\lambda_w) \quad \text{for } \lambda_w = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$a_i = A_i + \epsilon_{a,i} \quad \text{for } i = 1, 2, \qquad \epsilon_{a,i} \sim N(0,\ e_a \times \|A\|_2), \qquad e_a \sim \exp(\lambda_a) \quad \text{for } \lambda_a = 1, 5, 10, 50, 100, 500, 1000, 5000$$
$$W = [0.01,\ 0.01], \qquad A = [-0.4,\ 0.5]$$
E.3.5 UNIFORM SAMPLING
$$w_i \sim U(0.005,\ 0.015) \quad \text{for } i = 1, 2, \qquad a = A = 0.5$$
E.4 GP SCORE:
Definition of the GP score: the score is defined as $1 - u/v$, where $u$ is the residual sum of squares $\sum (y_{\text{true}} - y_{\text{pred}})^2$ and $v$ is the total sum of squares $\sum (y_{\text{true}} - \text{mean}(y_{\text{true}}))^2$. The best possible score is 1.0.
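Equivalently, as a small sketch (this is the coefficient of determination R², as returned by scikit-learn's `.score()`):

```python
import numpy as np

def gp_score(y_true, y_pred):
    u = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    v = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - u / v                         # best possible score: 1.0
```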
E.5 ZERO-SHOT PLANNING TASK:
For the task of section 4.4: the number of training trajectories is 100, each with 1500 time steps.

$$K_d = 0.01, \qquad K_p \text{ uniformly sampled from } [0.2,\ 0.6]$$
$$\text{Initial point: } x_\circ = [\pi/2,\ \pi/2,\ \pi], \qquad \text{desired position: } [\pi/10,\ 3\pi/4,\ 7\pi/12]$$
F DETAILED LITERATURE REVIEW
There has been a recent surge of interest in unsupervised methods in reinforcement learning, where a task-specific reward function is not the only driving force for training the agent (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016). A truly intelligent agent must behave intelligently in a range of tasks, not only in the single task associated with its reward function. This requires the agent to develop some sort of general competence that allows it to come up with solutions to new problems by combining low-level primitive skills. This general competence is a key factor that lets animals adapt quickly and efficiently to a new problem (Weng et al., 2001). Calling traditional RL extrinsically motivated RL, the new framework is called intrinsically motivated RL. There have been many ideas in this line, with various definitions for the terms motivation and intrinsic. Some researchers assume a developmental period in which the agent acquires reusable modular skills that can easily be combined to tackle more sophisticated tasks (Kaplan & Oudeyer, 2003; Weng et al., 2001). Curiosity and confidence are other unsupervised factors that can drive the agent towards unexplored spaces to achieve new skills (Schmidhuber, 1991b; Kompella et al., 2017). Interestingly, there are observations in neuroscience that dopamine, a substance known to control one's motivation for extrinsic rewards, is also associated with intrinsic properties of the agent such as novelty and curiosity: a novel sensory stimulus activates dopamine cells in the same way they are activated by extrinsic reward. Children build a collection of skills cumulatively while they engage in activities without a specific goal, e.g., hitting a ball repeatedly without a long-term target such as scoring a goal; the achieved skills contribute to their stability while handling objects (Touwen et al., 1992).
Another line of work concerns the fundamental constraints of the agent and environment and ensures those constraints are met during learning. In many practical systems, learning episodes must halt if the system is likely to undergo an irreversible change; for instance, the training episodes of a fragile robot must ensure the robot does not fall or break under any circumstance while acting under a certain policy. The general name safe RL embodies ideas for tackling such issues in current interactive learning algorithms (Garcıa & Fernández, 2015). One major aspect of safety is stability, which loosely means that the states of the system converge to some invariant sets or remain within certain bounds (Lyapunov, 1992). Control theory exploits a physical model of the system to guarantee stability (Khalil, 2002). When the physical model is not known in advance, the model is either learned along with the policy (model-based RL) or implicitly distilled into the value function (model-free RL) (Sutton & Barto, 2018). Stability can be categorized as an intrinsic motivation for the agent: no matter what task the agent aims to solve, it must remain stable at all times. Learning the transition model, the major concern of model-based RL, can also be seen as intrinsic motivation: the agent learns to predict the next state given the current one. The advantage of learning a model, even an inaccurate one, is twofold: the agent knows both where to go and where not to go. It knows which regions of the state space are unsafe to explore and must be avoided, and which regions are unexplored and might be informative for improving the model. This brings us to another view of intrinsic reward, one that encourages diversity.
Our work is also relevant to sensitivity analysis and its use in training the parameters of dynamical models. After NeuralODE (Chen et al.), which trains neural networks via sensitivity analysis of the network parameters, the method was successfully applied to various tasks such as learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as model-free sensitivity analysis on real-world systems. In NeuralODE, the gradient with respect to the parameters requires solving ODEs for both the states and the adjoint states, which requires a transition model. Since we work directly on the physical system, we do not need to compute the integrals forward in time: the system itself acts as a physical ODE solver.
The importance of learning from unlabeled experience is a known fact in animals. Many animals function efficiently soon after birth, before being exposed to massive labeled experience. Part of this might be due to unsupervised learning, but a major part of the story can be a genetic heritage from years of evolution, which Zador calls the genomic bottleneck. The same idea turned out to be valid in statistical learning, where an automatically discovered neural network architecture performs surprisingly well with a shared random weight (Gaier & Ha, 2019). The inductive bias embedded in neural network architectures can be seen as analogous to the wiring of the brains of animal infants, which transfers from generation to generation through genes.
F.1 PROOFS
Proof to the lemma on voxelization error.
Proof. The voxels become boxes in 3D, as in fig. 15. The gradient is estimated from the distance between two points in 3D coordinates. Hence, the source of the voxelization error is approximating the distance between two points with the distance between the centers of the corresponding boxes to which those points belong. This error is written next to the boxes in fig. 15. The maximum error is inversely proportional to the distance between voxels, meaning that voxels located far from each other induce less voxelization error. This is intuitively clear: when two points are distant from each other, a slight change in their positions does not change the distance between them considerably. The upper bound on the error, however, occurs for a single voxel, where the error is bounded by the size of the voxel.
G MORE RESULTS
In this section, we present the results of the extra experiments that were omitted from the main text due to space limits.
The following figures show GP models trained by a set of directional derivatives collected during the shaking phase. The results are provided for the experiments of sections 4.2 and 4.3.
ICLR | Title
Learning by shaking: Computing policy gradients by physical forward-propagation
Abstract
Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them. (a) (b) (c) (d) Figure 1: Physical finger platform in action with different policies.
1 INTRODUCTION
Traditional reinforcement learning crucially relies on reward(Sutton & Barto, 2018). However, reward binds the agent to a certain task for which the reward represents success. Aligned with the recent surge of interest in unsupervised methods in reinforcement learning (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016) and previously proposed ideas (Schmidhuber, 1991a; 2010), we argue that there exist properties of a dynamical system which are not tied to any particular task, yet highly useful, and their knowledge can help solve other tasks more efficiently. This work focuses on the sensitivity of the produced trajectories of the system with respect to the policy so called Physical Derivatives. The term physical comes from the fact that it uses the physics of the system rather than any idealized model. We learn a map from the directions in which policy parameters change to the directions in which every state of the trajectory changes. In general, our algorithm learns the Jacobian matrix of the system at every time step through the trajectory. The training phase consists of physically calculating directional derivatives by the finite difference after applying perturbed versions of a nominal policy (a.k.a. controller). Perturbing the parameters of the controller is the reason for naming our method shaking. The test phase uses these directional derivatives to compute derivatives along unseen directions. Due to the difficulty of computing the Jacobian matrix by the finite difference in higher dimensions, we use random controllers joint with probabilistic learning methods to obtain a robust estimate of the Jacobian matrix at each instant of time along a trajectory. We are capable of this generalization to unseen perturbations because the trajectories of physical systems live on an intrinsic low-dimensional manifold and change slowly with the small changes in the parameters of the system (Koopman, 1931). This assumption holds as long as the system is not chaotic or close to a bifurcation condition (Khalil, 2002).
1.1 PRELIMINARIES
A reward function describes how close the agent is to the solution of the target task. In the absence of the reward, the agent will be given no means to find its way towards the solution. Let x ∈ X ⊆ Rd be a d-dimensional state vector that fully describes the environment with which the agent interacts. At each state, the agent is allowed to take action u ∈ U ⊆ Rq from a q-dimensional action space via a parameterised policy function u = π(x;θ). The agent will be rewarded r(x,u) by the function r : X × U → R when it takes action u at state x. The goal of learning is to update θ such that some desired target is achieved. The target can be anything as long as a concrete reward function is associated with it. In stochastic cases, return R : Π(Θ) → R is defined as a cumulative future discounted reward whose expectation is often of main interest. For parametric policies, the space of feasible parameters Θ has a one-to-one correspondence to the policy space Π. The agent who takes on the policy π from state x0 produces the trajectory T ∈ T where T is the space of possible trajectories. For a return function R : T→ R, the expected return becomes a function of the policy as J(πθ) = ET {R(T )} where the expectation is taken with respect to the probability distribution P (T |πθ). There exist two major classes of approaches in reinforcement learning: value-based methods and value-free methods. In the first class, a surrogate function is defined to approximate the value of either a state V (x) or a state-action pair Q(x,u). The policy is updated such that the agent tends towards states with higher values. The value-free methods update the policy directly without any need for an auxiliary function such as V or Q. This paper mainly concerns the second class. The policy parameters are updated as
θt+1 = θt + α ∂J(πθ)
∂θ ∣∣∣∣ θ=θt
(1)
and the gradient ∂J(πθ)/∂θ is written as
∂J(πθ)
∂θ = ∫ T ∂p(T |πθ) ∂θ R(T ) dT (2)
which is normally difficult to compute in practice. As can be seen in eq. (2), the integrand of the r.h.s. consists of two terms. The second term R(T ) is the return which is defined according to the target task. Hence, this term is task-dependent. The first term ∂p(T |πθ)/∂θ though shows how the trajectories change with respect to a change in the policy. Notice that there is no notion of reward or any task-dependent quantities in this term. For an empirical distribution pe(T |π) = 1 M ∑M i=1 δ(T − T (i)), the dependence of partial derivative of the distribtion of T on the partial derivative of T can be explicitely derived as
∂pe(T |πθ) ∂θ = 1 M M∑ i=1 u1(T − T (i)) ∂T ∂θ
(3)
where u1 is the unit doublet function (derivative of the Dirac delta function). This examplary distribution makes it clear that the change in the distribution of trajetories relates to the change of the trajectories themselves. As an unsupervised object, ∂T /∂θ is of main interest in this paper.
1.2 PHYSICAL DERIVATIVE
In this paper, we investigate the feasibility of learning a less explored unsupervised quantity, the so called Physical Derivative which is computed directly from the physical system. In abstract terms, we perturb the policy and learn the effect of its perturbation on the resulting trajectory. The difference from traditional RL whose algorithms are based on eq. (1) is the absence of a specified reward function. Instead, we generate samples from ∂p(T |πθ)/∂θ of eq. (2) that makes it possible to compute ∂J(πθ)/∂θ for an arbitrary return function R. If the exact model of the system is known, control theory has a full set of tools to intervene in the system with stability and performance guarantees. When the system is unknown, one could identify the system as a preliminary step followed by a normal control synthesis process from control theory (Ljung, 2001). Otherwise, the model and the policy can be learned together in a model-based RL (Sutton, 1996) or in some cases adaptive control (Sastry & Bodson, 2011). We argue that learning physical derivatives is a middle ground. It is not model-based in the sense that it does not assume knowing the exact model of the system. Rather, it knows how the trajectories of the system change as a result of perturbing the policy
parameters. This differential information of the system has applications in many downstream tasks. This work focuses on the concept and introduction of physical derivatives and direct applications would go significantly beyond the scope of this work. Few potential applications are discussed with more details in appendix C.
Our contributions— In summary, the key contributions of the current paper are as follows:
• A method to generate training pairs to learn the map from the policy perturbations to the resulting changes in the trajectories.
• Learning the above map as a probabilistic function and showing that it generalizes to unseen perturbations in the policy.
• Use the inverse of the above map to perturb the policy in the desired direction to achieve certain goals without conventional RL methods.
• Use a physical custom-built robotic platform to test the method and propose solutions to deal with the inherent issues of the physical system to ensure the practicality of the method (see fig. 1 for images of the platform and and appendix A for technical details).
• The supplementary materials for the paper, including code and the videos of the robot in action can be found in https://sites.google.com/view/ physicalderivatives/
2 METHOD
In this section, we describe our pipeline to estimate the physical derivatives and our proposed solutions to the inevitable challenges that are likely to occur while working with a real physical robot. We are interested in ∂T /∂θ which denotes how a small change in the parameters θ of the controller results in a different trajectory produced by the system. We normally consider a finite period of time [0, T ] and the trajectory is an ordered list of states T = [x0,x1, . . . ,xT ] where the subscript shows the time step. Therefore, having ∂T /∂θ is equivalent with having ∂xt/∂θ for every t ∈ {1, . . . , T}. Notice that the initial state x0 is chosen by us. Hence we can see it either as a constant or as a changeable parameter in θ. We kept it fixed in our experiments.
Assume xt ∈ Rd and θ ∈ Rm. Hence,∇θxt = ∂xt/∂θ ∈ Rd×m where the tth row of this matrix is ∇θxit = (∂xit/∂θ)T ∈ Rm showing how the ith dimension of the state vector changes in response to a perturbation in θ. The directional derivative of xit in the direction δθ is defined as
∇δθθ xit = 〈∇θxit, δθ
|δθ| 〉. (4)
If (4) is available form linearly independent and orthonormal directions, {δθ(1), δθ(2), . . . , δθ(m)}, the directional derivative along an arbitrary δθ can be approximated by
∇δθθ xit = m∑ j=1 cj〈∇θxit, δθ(j)〉 (5)
where cj = 〈δθ, δθ(j)〉 is the coordinates of the desired direction in the coordinate system formed by the orthonormal bases.
In practice, m directions δθ(j) can be randomly chosen or can be along some pre-defined axes of the coordinate system. To compute 〈∇θxit, δθ(j)〉, the nominal policy parameters θ are perturbed by δθ(j) as θ(j) ← θ + δθ(j) and the derivative is computed as
〈∇θxit, δθ(j)〉 = lim h→0 xit(θ + hδθ (j))− xit(θ) h . (6)
This quantity is often approximated by finite difference where h takes a small nonzero value. By perturbing the parameters θ along m orthonormal directions δθ(j) and computing the approximate directional derivative by (6), ∇δθθ xit can be computed along every arbitrary direction δθ, meaning that, we can compute∇θxit by evaluating it along any direction which is the aim of this paper.
✓1 <latexit sha1_base64="b2Ff/oUFJw0eznxXS1RygRK2bZk=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGnf65crbtWdg6wSLycVyNHol796g5ilEVfIJDWm67kJ+hnVKJjk01IvNTyhbEyHvGupohE3fja/d0rOrDIgYaxtKSRz9fdERiNjJlFgOyOKI7PszcT/vG6K4bWfCZWkyBVbLApTSTAms+fJQGjOUE4soUwLeythI6opQxtRyYbgLb+8Slq1qndRrd1fVuo3eRxFOIFTOAcPrqAOd9CAJjCQ8Ayv8OY8Oi/Ou/OxaC04+cwx/IHz+QPOtY/Q</latexit>
✓2 <latexit sha1_base64="oe3DagNbCs6bjj10ybLfZH9d5SY=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGm/1i9X3Ko7B1klXk4qkKPRL3/1BjFLI66QSWpM13MT9DOqUTDJp6VeanhC2ZgOeddSRSNu/Gx+75ScWWVAwljbUkjm6u+JjEbGTKLAdkYUR2bZm4n/ed0Uw2s/EypJkSu2WBSmkmBMZs+TgdCcoZxYQpkW9lbCRlRThjaikg3BW355lbRqVe+iWru/rNRv8jiKcAKncA4eXEEd7qABTWAg4Rle4c15dF6cd+dj0Vpw8plj+APn8wfQOY/R</latexit>
✓1 <latexit sha1_base64="b2Ff/oUFJw0eznxXS1RygRK2bZk=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGnf65crbtWdg6wSLycVyNHol796g5ilEVfIJDWm67kJ+hnVKJjk01IvNTyhbEyHvGupohE3fja/d0rOrDIgYaxtKSRz9fdERiNjJlFgOyOKI7PszcT/vG6K4bWfCZWkyBVbLApTSTAms+fJQGjOUE4soUwLeythI6opQxtRyYbgLb+8Slq1qndRrd1fVuo3eRxFOIFTOAcPrqAOd9CAJjCQ8Ayv8OY8Oi/Ou/OxaC04+cwx/IHz+QPOtY/Q</latexit>
✓2 <latexit sha1_base64="oe3DagNbCs6bjj10ybLfZH9d5SY=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGm/1i9X3Ko7B1klXk4qkKPRL3/1BjFLI66QSWpM13MT9DOqUTDJp6VeanhC2ZgOeddSRSNu/Gx+75ScWWVAwljbUkjm6u+JjEbGTKLAdkYUR2bZm4n/ed0Uw2s/EypJkSu2WBSmkmBMZs+TgdCcoZxYQpkW9lbCRlRThjaikg3BW355lbRqVe+iWru/rNRv8jiKcAKncA4eXEEd7qABTWAg4Rle4c15dF6cd+dj0Vpw8plj+APn8wfQOY/R</latexit>
Figure 2: Gaussian (left) and uniform (right) shaking examples.
In the matrix form for x ∈ Rd, we can compute ∇δθ(j)θ x = [∇δθ (j) θ x1,∇δθ (j) θ x1, . . . ,∇δθ (j) θ xd] T in a single run by computing (6) for all d dimensions of the states. Let’s define
∆θx , [∇δθ (1) θ x,∇δθ (2) θ x, . . . ,∇δθ (m) θ x] (7)
where ∆θx ∈ Rd×m and let Λ = [δθ(1), δθ(2), . . . , δθ(m)]. Therefore, if ∆δθθ x shows the directional derivative of x along δθ, we can write it as:
∇δθθ x = ∆θx(ΛTδθ) (8)
which is only a vectoral representation of eq. (4). Even though the linear formula of eq. (8) requires only m directional derivatives, it has two major downsides. First, it does not give a clear way to incorporate more than m training directional physical derivatives. Second, the linear approximation remains valid only for very small δθ. We propose Gaussian Process (GP) as a nonlinear probabilistic function approximator (Rasmussen, 2003) to capture the maps ĝt defined as
ĝt : Θ→ X (9) ĝt(δθ) = δx (10)
where subscript t shows the function that maps δθ to the change of the states δxt at time step t. We considered distinct functions for every time step. Taking into account the commonality among the function approximators corresponding to different time steps is deferred to future research. Learning this map requires training data that comes from an initial data collection phase called shaking. Shaking refers to perturbing parameters of the controller to obtain the set of trajectories produced by the perturbed controllers.
The perturbation can be either regular or stochastic. Stochastic perturbations have the advantage over regular perturbations that the agent does not need to be worried about perturbing the parameters in a particular direction. Besides, in some cases, perturbing the parameters of the policy in certain directions is infeasible. We propose two methods of shaking called Gaussian and Uniform shaking.
Gaussian shaking— Likely values of θ create nominal policies encoded by {θ(1),θ(2), . . . ,θ(m)}. We put Gaussian distributions centered at each of the nominal values resulting in a mixture of Gaussians. To reduce the hyper-parameters, we assume the variances of the Gaussians are themselves sampled from an exponential distribution making sure they all take positive values (See fig. 2 left). Here, we manually choose a reasonable value for the rate parameter of the exponential distribution. Doing inference on the hyper-parameters of the sampling distributions can be a topic for future research especially in active learning for a more clever less costly sampling stratgey.
Uniform shaking— In this setting, the state space of the changeable parameters of the policy is discretized and a uniform distribution is assumed around each value of this grid with some overlapping with the neighboring cells (See fig. 2 right).
We show the effect of each of these sampling methods later in section 4. We observed that the results are less sensitive to the hyper-parameters of uniform sampling than to those of Gaussian sampling. A carelessly chosen rate for the exponential distribution that generates the variances of the Gaussians can result in overly local or overly global sampling, which gives rise to a large variance or bias in the estimated gradients.
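With shaking data in hand, each map ĝt can be fit with an off-the-shelf GP library. The following is a minimal sketch, not the authors' code; the kernel choice and hyper-parameters are our own assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_g_t(delta_thetas, delta_x_t):
    """Fit g_t: policy perturbations of shape (n, m) -> state changes at step t, shape (n, d)."""
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(delta_thetas, delta_x_t)
    return gp

# Example query: predicted state change (with uncertainty) for an unseen perturbation:
# mean, std = fit_g_t(D_theta, D_x).predict(new_delta_theta.reshape(1, -1), return_std=True)
```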
3 REAL WORLD CHALLENGES
In this section, we present two major low-level challenges that are common when dealing with physical systems. There exist inherent noise and imperfections in the system that result in changes in the produced trajectories even while the policy parameters are kept fixed. On our finger platform, we observed two major sources of noise which are likely to occur in other physical systems too. We call them temporal and spatial noise, for the reasons given in the following.
Temporal noise. The temporal noise, represented by $n$, affects trajectories by shifting them in time:
$$x_t \leftarrow x_{t+n} \quad \text{for } t = 0, 1, \ldots, T. \qquad (11)$$
Notice that the absence of subscript t in n shows that this noise is not time-dependent, i.e., the time shift does not change along the trajectory as time proceeds.
Spatial noise. Trajectories affected by spatial noise cannot be aligned with each other by shifting forward or backward in time. We can model this noise as a state-dependent influence on the state of the system at every time step:

$$x_t \leftarrow x_t + n_{x_t} \qquad (12)$$
The following definition makes the distinction more concrete.
Definition 1. Consider two trajectories $T^{(1)}(t)$ and $T^{(2)}(t)$ as two temporal signals. Assume $S_{t_\circ}$ is the shift-in-time operator defined as

$$S_{t_\circ}T(t) = T(t + t_\circ) \qquad (13)$$

for an arbitrary function of time $T(t)$. We say $T^{(2)}(t)$ is a temporally noisy version of $T^{(1)}(t)$ if

$$\exists\, t_\circ \in \mathbb{R} \ \text{s.t.}\ \|T^{(2)} - S_{t_\circ}T^{(1)}\|_1 \leq \epsilon \qquad (14)$$

where $\epsilon$ is a hyper-parameter threshold that reflects our prior confidence about the accuracy of the motors, joints, and physical and electrical elements (in general, the construction process) of the robot. On the other hand, $T^{(2)}$ is called a spatially noisy version of $T^{(1)}$ if

$$\nexists\, t_\circ \in \mathbb{R} \ \text{s.t.}\ \|T^{(2)} - S_{t_\circ}T^{(1)}\|_1 \leq \epsilon \qquad (15)$$
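Definition 1 directly suggests a simple diagnostic. The sketch below is our own illustration; it searches over integer shifts only and ignores boundary effects, with `eps` playing the role of the threshold ε.

```python
import numpy as np

def is_temporal_noise(traj_ref, traj_test, eps, max_shift=100):
    """Return True if some time shift brings traj_test within eps of traj_ref
    in the L1 sense (temporal noise); otherwise treat the discrepancy as spatial."""
    T = min(len(traj_ref), len(traj_test))
    for tau in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -tau), min(T, T - tau)      # overlapping index range
        diff = traj_test[lo + tau:hi + tau] - traj_ref[lo:hi]
        if np.abs(diff).sum() <= eps:
            return True
    return False
```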
3.0.1 SOLUTION TO TEMPORAL NOISE
Fortunately, this type of noise is not state-dependent by definition. If we find out how much a trajectory is shifted in time with respect to another trajectory, we can simply shift the trajectory by that many time steps and compensate for the delay. Hence, the problem becomes detecting the lagged trajectories with respect to a reference trajectory and estimating the required time shift to compensate for the delay. We can either use physical landmarks in the trajectories to align them or use the correlation between them as a measure of alignment. The latter gave better results; hence, we postpone the description of the former to appendix D.1.
Correlation-based delay estimation In this method, we use the correlation between zero-meaned trajectories $T^{(i)}$ and $T^{(j)}$ to check if one is the lagged version of the other one. The delay $\tau$ is found by

$$\tau^* = \arg\max_{\tau} \sum_{t=0}^{T-\tau} \langle S_\tau x_t^{(i)}, x_t^{(j)} \rangle \qquad (16)$$
where Sτ is a shift-operator by τ ∈ Z time steps. In practice, we take one trajectory of {T (1), T (2), . . . , T (M)}, e.g. T (r) as the reference and synchronize other trajectories with respect to it using eq. (16). The trajectories must be initially normalized to avoid trivial solutions where every trajectory is pushed towards the larger parts of the reference trajectory. For illustrative purposes, the plots of fig. 14 show a sample of the lagged trajectory from the finger platform and its correction by the above method.
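A numpy sketch of eq. (16) follows (our own illustration; it assumes the trajectories have already been zero-meaned and normalized, and searches nonnegative shifts only; negative shifts can be covered by swapping the arguments).

```python
import numpy as np

def estimate_delay(x_ref, x_lagged, max_shift=200):
    """Return the shift tau maximizing the correlation of eq. (16)."""
    T = min(len(x_ref), len(x_lagged))
    best_tau, best_corr = 0, -np.inf
    for tau in range(max_shift + 1):
        corr = np.sum(x_lagged[tau:T] * x_ref[:T - tau])   # inner products summed over t
        if corr > best_corr:
            best_tau, best_corr = tau, corr
    return best_tau

# Synchronize by dropping the first tau samples of the lagged trajectory:
# tau = estimate_delay(x_ref, x_lagged); x_sync = x_lagged[tau:]
```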
3.1 SOLUTION TO SPATIAL NOISE
The spatial noise can be a stochastic function of the actuators, environmental change, and electronic drivers. In a perfect model of the transition dynamics xt+1 = f(xt, ut), applying the same control sequence {u0, u1, . . . , uT−1} always results in the same sequence of states {x1, x2, . . . , xT } when it starts from the same initial state x0. This assumption is often violated in physical systems, as different runs of the same policy may result in different trajectories, as can be seen in fig. 10 in the Appendix. The noise in the dynamics can be any function of states, input, and time. Therefore, it is difficult to model this noise, since doing so would require a prohibitively large number of random experiments. The good news is that if the physical system is built properly, the effect of this noise is expected to be low. Based on our observations from the finger platform, we can assume the following.
Assumption 2. Limit on the physical noise: Let the control sequence $U = \{u_0, u_1, \ldots, u_{T-1}\}$ be applied to the system $M$ times, resulting in multiple sequences of states $T^{(1)}, T^{(2)}, \ldots, T^{(M)}$. There exists a relatively small $\zeta$ such that

$$\|T^{(i)} - T^{(j)}\|_\infty \leq \zeta \quad \text{for every } i, j \in \{1, 2, \ldots, M\}. \qquad (17)$$
The word relatively here means that the change of the trajectory due to the inherent physical noise of the system must be small compared to the change of the trajectories when the parameters of the policy are perturbed.
To reduce the sensitivity of the estimated gradient to this unwanted spatial noise, we divide the state space of the physical system into regularly located adjacent cells called voxels. Each voxel vox(c) is represented by its center c and is defined as

$$\text{vox}(\mathbf{c}) = \{\mathbf{x} \in X \mid \|\mathbf{x} - \mathbf{c}\|_\infty \leq \gamma\} \qquad (18)$$

where γ is the parameter of the voxelization. The concept of the voxel is roughly used as a superstate: every state that ends up within vox(c) gives rise to the same superstate. After recording the trajectories from the robot, every state is mapped to the center of the voxel it belongs to as

$$\mathbf{c} \leftarrow \mathbf{x} \quad \text{for } \mathbf{x} \in \text{vox}(\mathbf{c}) \qquad (19)$$

After voxelization, we work with c instead of x. For example, all the gradients of (7) are computed as ∇θc rather than ∇θx. To illustrate the positive effect of voxelization of the state space, fig. 3 shows that increasing the voxel size improves the overlap between two trajectories that deviate from each other due to the inherent spatial noise of the system, not because of perturbing the parameters of the policy but because of the inherent imperfection of the mechanical and electrical components of the system. This benefit comes with a cost, which is the error introduced by voxelization. Fortunately, this error is bounded, due to the following lemma.
Lemma 3. The error caused by voxelization is bounded and inversely proportional to the size of each voxel (see appendix F.1 for a brief proof).
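In code, the voxelization of eqs. (18)–(19) amounts to snapping each state to the center of an axis-aligned cube of half-width γ; a two-line numpy sketch:

```python
import numpy as np

def voxelize(x, gamma):
    """Map states to the centers of their voxels (eq. 19); each voxel is an
    axis-aligned cube of half-width gamma (eq. 18), i.e., edge length 2*gamma."""
    edge = 2.0 * gamma
    return (np.floor(x / edge) + 0.5) * edge

# A whole trajectory of shape (T, d) can be voxelized in one call: voxelize(traj, 0.05)
```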
After dealing with the challenge of inherent noise, we pursue the main goal of this paper, which is estimating ∂T/∂θ directly from the physical system. In the following, we investigate the use of different types of controllers to emphasize the extent of applicability of the proposed method.
4 EXPERIMENTS
In this section, we show how physical derivatives can be estimated in practice through several experiments. Notice that our work is different from computing gradients around the working point of a system by finite differences. We aim to collect samples of such gradients by perturbing a grid of nominal values of the policy parameters and then to generalize to unseen perturbations by a Gaussian process as a probabilistic regression method. The experiments are designed to show each challenge separately and the efficacy of our proposed solution to it. Due to space constraints, details of the physical platform can be found in section A in the Appendix. See the project website1 for videos of the robot while collecting data for the different experiments and further supporting materials.
1https://sites.google.com/view/physicalderivatives/
4.1 LINEAR OPEN-LOOP CONTROLLER
As a simple yet general policy, in this section, we consider an open-loop controller which is a linear function of time. The policy ut = [u1t, u2t, u3t] constitutes the applied torques to the three motors {m1,m2,m3} of the system and is assigned as
$$u_{it} = w_i t + b_i \quad \text{for } i = 1, 2, 3 \qquad (20)$$

Notice that the torque consists of two terms: the first term $w_i t$ grows with time, and the second term remains constant. The controller has 6 parameters in total, denoted by θ. The task is to predict ∇θxt for every t along the trajectory. In the training phase, the training data is obtained via shaking as described in section 2.
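Concretely, training pairs for the maps ĝt can be produced by rolling out the nominal and perturbed controllers and taking finite differences, cf. eq. (6). The sketch below is illustrative; `rollout` stands for executing the physical system (or a simulator) and returning the (T, d) trajectory, and is an assumed placeholder, not part of the paper.

```python
import numpy as np

def collect_training_pairs(theta, perturbations, rollout):
    """For each perturbation dtheta, record (dtheta, x_t(theta + dtheta) - x_t(theta))
    for all time steps t; these pairs are the training data for the maps g_t."""
    nominal = rollout(theta)                      # (T, d) nominal trajectory
    pairs = []
    for dtheta in perturbations:                  # e.g., rows from gaussian_shaking(...)
        delta_x = rollout(theta + dtheta) - nominal
        pairs.append((dtheta, delta_x))           # delta_x[t] is the target of g_t
    return pairs
```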
Fig. 7 shows examples of nominal trajectories, trajectories produced by the perturbed controller, and the computed derivatives. The arrows are plotted as originating from the perturbed trajectories only for easier distinction. Each arrow corresponds to the change of the states at a certain time step on the source trajectory as a result of perturbing the policy. Each figure corresponds to a pair of nominal values of {w, b} for the linear open-loop controller. See fig. 29 for examples.
4.2 NONLINEAR OPEN-LOOP CONTROLLER
Physical derivatives can naturally be computed for either linear or nonlinear controllers, which makes our approach different from taking the gradient of models through time. In model-based methods, if the model's transition dynamics are not differentiable, taking the derivative is theoretically challenging. However, our method takes advantage of the real physics of the system to compute the gradients regardless of whether an approximating model would be differentiable or not. To elaborate on this, we test our method on a simple but nonlinear policy, i.e., ut = A sin(ωt). The sinusoidal torque is applied to either one or two motors of the system to investigate the performance of our method. We tested Gaussian and uniform shaking for θ = {A, ω} as the parameters of this controller. The GP interpolation for the partial derivatives at some time instances along the trajectory can be seen in fig. 6 and, more extensively, in figs. 16 to 18 in the Appendix. One might be interested in the direction of the predicted derivative instead of its exact magnitude. To this end, we take several test perturbations for every time step and use cos(α) as a measure of alignment between the predicted and ground-truth derivative vectors. The time evolution of the histogram of this measure along the trajectory shows better alignment as time proceeds. This effect can be seen in figs. 27 and 28. It confirms our observation of initial transient noise in the system that dies out gradually as time progresses. The overall performance of our method in predicting physical derivatives in unseen directions for the two shaking methods is shown in appendix E.
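The alignment measure used above is simply the cosine between the predicted and ground-truth derivative vectors; for completeness, a two-line version:

```python
import numpy as np

def alignment(pred, true):
    """cos(alpha) between predicted and ground-truth derivative vectors;
    1 means perfectly aligned, -1 perfectly opposed."""
    return np.dot(pred, true) / (np.linalg.norm(pred) * np.linalg.norm(true))
```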
4.3 FEEDBACK CONTROLLER
Often in practice, the policy incorporates some function of the states of the system. Some well-known examples which have been extensively used in control applications are P, PD, PI, and PID controllers. Here, we consider two members of this family, i.e., P and PD controllers. The policy becomes u = Kp e for P controllers and u = Kp e + Kd ė for PD controllers. The error e is the difference between the current state x and the desired state x∗. The parameters of the controller {Kp, Kd} are scalar values that are multiplied by the error vector element-wise. This implies that the controller parameters are the same for the three motors, leaving the controller of the whole platform with two parameters that weight the value and the rate of the error. We applied uniform and Gaussian shaking for the set of parameters θ = {Kp, Kd} with different scenarios. The GP interpolation for the physical derivatives at some time instances along the trajectory can be seen in fig. 6 and, more extensively, in figs. 19 to 24 in the Appendix. The time evolution of the histogram of misalignment between predicted and ground-truth directional derivatives (see figs. 25 and 28 in the appendix) once again confirms the existence of the initial transient noise, as was also observed in section 4.2. Similar to the sinusoidal experiment, the overall performance of our method is presented in appendix E.
4.4 ZERO-SHOT PLANNING TASK
Our previous experiments in sections 4.1, 4.2, and 4.3 showed that learning the physical derivative map is feasible for various types of controllers. In this section, we demonstrate an example of a constraint satisfaction task by means of the physical derivative map. In this experiment, the superscript (s) corresponds to the nominal trajectory, which is called the source. Assume the system is controlled by a PD controller to reach a target state $x^*$, i.e., the control torques are designed as $u = k_p^{(s)}(x - x^*) + k_d^{(s)}\dot{x}$. The controller does a decent job of reaching the target state given reasonable values for $k_p$ and $k_d$. However, such a controller does not give us a clear way to shape the trajectory that starts from $x_\circ$ and ends at $x^*$. Assume it is desired that the nominal controlled trajectory $T^{(s)}$ passes through an intermediate state $x_t^*$ at time $t$ on its way towards the target state $x^*$ (we can equally assume that the system must avoid some regions of the state space for safety reasons). The solution with physical derivatives is as follows. Assume $k_d^{(s)}$ is fixed and only $k_p^{(s)}$ is changeable. If the physical derivative map is available, we have access to $\hat{g}_t(k_p^* - k_p^{(s)}) = (x_t^* - x_t^{(s)})/(k_p^* - k_p^{(s)})$. By simple algebraic rearrangement, we have

$$k_p^* = \frac{x_t^* - x_t^{(s)}}{\hat{g}_t(k_p^* - k_p^{(s)})} + k_p^{(s)}. \qquad (21)$$
The new parameter of the policy is supposed to push the source trajectory T (s) towards a target trajectory T ∗ that passes through the desired state x∗t at time t. The result of this experiment on our physical finger platform can be seen in fig. 8.
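For a single state coordinate, the update of eq. (21) is a one-liner once ĝt is learned. The sketch below is our own illustration; since k*p appears on both sides of eq. (21), ĝt is evaluated at an initial guess of the perturbation (e.g., carried over from the previous iteration).

```python
def plan_kp(kp_source, x_t_source, x_t_desired, g_t, dkp_guess):
    """One application of eq. (21) for a scalar state coordinate.
    g_t(dkp) returns the learned slope (x_t(kp) - x_t_source) / dkp."""
    slope = g_t(dkp_guess)               # evaluate g_t at the guessed perturbation
    return (x_t_desired - x_t_source) / slope + kp_source
```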
4.5 RELATED WORKS
A truly intelligent agent must develop some sort of general competence that allows it to combine primitive skills to master a range of tasks, not only a single task associated with a specified reward function. The major part of such competence comes from unsupervised experiences. Animals use a similar competence to quickly adapt to new environments (Weng et al., 2001) and function efficiently soon after birth, before being exposed to massive supervised experience (Zador, 2019). Due to its generality, such basic skills can be inherited over generations rather than being learned from scratch (Gaier & Ha, 2019). Unlike traditional RL, where learning is driven by an extrinsic reward signal, intrinsically motivated RL concerns task-agnostic learning. Similar to animal babies (Touwen et al., 1992), the agent may undergo a developmental period in which it acquires reusable modular skills (Kaplan & Oudeyer, 2003; Weng et al., 2001) such as curiosity and confidence (Schmidhuber, 1991a; Kompella et al., 2017). Another aspect of such general competence is the ability of the agent to remain safe during its learning and deployment period (Garcıa & Fernández, 2015). In physical systems, especially in continuous control, stability is a major aspect of safety, implying that the states of the system converge to some invariant sets or remain within a certain bound (Lyapunov, 1992). Control theory often assumes the model of the system is known in order to guarantee stability (Khalil, 2002). In the absence of the model, model-based RL learns the model along with the policy. Hence, learning the transition model to predict future states can be another intrinsic reward.
From a technical point of view, our work is relevant to sensitivity analysis and how it is used to train the parameters of models, as in Chen et al.'s NeuralODE. That method proved effective in many tasks, including learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as a model-free sensitivity analysis on real-world systems. In NeuralODE, the gradient with respect to the parameters requires solving ODEs for both states and adjoint states, which requires a transition model. Since we work directly on the physical system, we do not need to calculate the integrals forward in time: the system itself acts as a physical ODE solver. We refer to appendix F for a more detailed review of the related works.
5 CONCLUSION
In this paper, we present a method to learn the way the trajectories of a real-world physical dynamical system change with respect to a change in the policy parameters. We tested our method on a custom-built platform called the finger robot, which allows testing a variety of controllers with various settings, showing the applicability of our method to linear, nonlinear, open-loop, and feedback controllers. By estimating the physical derivative function, we showed that our method is able to push a controlled trajectory towards a target intermediate state. We investigated the real-world challenges that arise when performing a fine, sensitive task such as estimating physical derivatives on a real robot, and proposed solutions to make our algorithm robust to the inherent imperfection and noise of physical systems. We focused mainly on the low-level issues of physical derivatives and on showing the feasibility of estimating them robustly. We expect that physical derivatives will contribute to research areas such as safety, control with constraint satisfaction, trajectory planning, and robust or safe control.
A PHYSICAL PLATFORM
In this section, we introduce the physical robot on which we tested our method. The robot is called the finger platform, or simply the finger, throughout this paper. The ranges of movement for the motors are [0, π], [0, π], and [0, 2π], respectively. The axes of the plots throughout the paper are in radians. The platform consists of three articulated arms with three degrees of freedom in total (see fig. 9d). The motors {m1, m2, m3} are depicted in the figure; this naming remains consistent throughout the paper. Each arm is moved by a separate brushless DC motor and has one degree of freedom to swing in its own plane (see fig. 9a). Each arm is equipped with an encoder that measures its angle (see fig. 9b). The brushless motors are controlled by an electronic driver that receives the torque values applied to each motor from a computer terminal via a CAN bus and applies the torques to the motors (see fig. 9c). Due to the imperfections of the arms, motors, and drivers, we did not use any model for the system, including the inertia matrix of the robot or the current-torque characteristic function of the motors. The low-cost and safe nature of this robot makes it a suitable platform to test the idea of physical derivatives, which requires applying many different controllers in the training phase.
B ADDITIONAL PLOTS ILLUSTRATING REAL WORLD CHALLENGES (SECTION 3)
C APPLICATIONS OF PHYSICAL DERIVATIVES
If we know how the states of a trajectory change as a result of a change in the policy parameters, the policy can easily be updated to push the trajectory towards a desired one. For example, assume we are interested in going from the current trajectory T(θ) to the target trajectory T∗. The distance between these trajectories can be minimized by perturbing the policy parameters in the direction −∂‖T(θ) − T∗‖/∂θ. This direction is already available, since we have estimated ∂T(θ)/∂θ as a physical derivative. As an exemplary case, we show this application of our method in practice in section 4. Other applications of physical derivatives are in robust control and safety. In both cases, the physical derivative allows us to predict the behaviour of the system if the policy changes in a neighbourhood around a nominal policy. Then, it is possible to make sure that some performance or safety criteria will not be violated for the local perturbation in the policy. As a concrete example, for an autonomous driving system, there can be a calibration phase during which the physical derivatives of the car are estimated by perturbing the controller parameters around different nominal policies which are likely to occur on real roads. The calibration must be done in a safe condition and before deploying the system. When deployed, the estimated physical derivatives can be used to predict the effect of a change of the policy on the behaviour of the system and to neutralize the change if it would move the car towards unsafe regions of its state space. The command that changes the policy can be issued by a high-level controller (e.g., a guidance system), and safety is confirmed by a low-level mechanism through physical derivatives. This work focuses on the concept and introduction of physical derivatives, and direct applications would go significantly beyond its scope. In the following, we give a more detailed description of the use of physical derivatives in robust and safe control.
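Before turning to those two settings, here is a minimal sketch of the trajectory-matching update mentioned above (our own illustration; it assumes a Jacobian `dT_dtheta` of shape (T·d, m) assembled from the learned maps, and the step size is arbitrary):

```python
import numpy as np

def trajectory_matching_step(theta, traj, traj_target, dT_dtheta, step=0.01):
    """Gradient step on ||T(theta) - T*||^2 using the physical derivative:
    d/dtheta ||T - T*||^2 = 2 (T - T*)^T dT/dtheta."""
    residual = (traj - traj_target).ravel()       # flatten (T, d) -> (T*d,)
    grad = 2.0 * dT_dtheta.T @ residual           # shape (m,)
    return theta - step * grad
```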
Robust control In control theory, robust control relates to the design of a controller whose performance is guaranteed for a range of systems and controllers belonging to a certain neighborhood around the nominal system (Zhou & Doyle, 1998). It is desired to have a controller that keeps the performance of the system at a certain good level even if the parameters of the controller are not fixed to the theoretical values. Assume the performance of the system is associated with some function of a trajectory E(T ). Changing the parameters of the controller θ results in a change in the trajectories. This allows us to compute ∂T /∂θ that consequently gives us ∂E(T )/∂θ by the chain rule. Roughly speaking, between two sets of parameters θ1 and θ2, the set of parameters that gives the least ∂E/∂θ is preferred. This means that by shaking the parameters of the controller and assessing the performance of the system, an estimate of the curvature of the landscape of E(T (θ)) is obtained. We prefer flatter regions of this space where a small change in θ does not cause a drastic change in the performance metric E .
Safety Safety refers to situations in which the agent may hurt itself or the environment and cause irreversible damage if it freely takes arbitrary actions (Garcıa & Fernández, 2015). For a safety-critical system whose full physical model is hard to obtain, the physical gradients can assist in restricting the parameters of the robot to avoid unsafe behavior. The physical derivatives are learned in the lab environment before the robot is deployed into the wild. For example, a rover whose mission is to safely explore an unknown environment often enjoys a learning loop that allows it to adapt to the new environment. Even though learning in the new environment requires sufficient exploration, the physical derivatives can be used to give a rough simulation of the robot's next few states under a given update to its parameters. Potentially harmful updates might be detected by such a simulation and avoided.
D EXTENDED SET OF SOLUTIONS TO THE REAL WORLD CHALLENGES
D.1 DETECTING ZERO CROSSING
In this method, we take advantage of special landmarks in the trajectories. The landmarks are typically caused by physical constraints of the system. For example, when a robot's leg touches the ground, the velocity of the leg becomes zero. Likewise, when a joint reaches its physical limit, the velocity of the arm connected to the joint becomes zero or changes sign. In both cases, a zero crossing occurs that can be used as a landmark to synchronize lagged trajectories with a reference trajectory. Even though this method will eliminate the temporal noise, it requires the presence of such landmarks along the trajectories. Notice that, from a mathematical point of view, there is nothing special about zero. We can pick any value of the states along a reference trajectory and synchronize all other trajectories with respect to it. However, in practice, physical landmarks are easier to detect and have less ambiguity, which consequently gives a more accurate synchronization.
E EXPERIMENTAL DETAILS
The starting position in all the experiments is (π/2, π/2, π). The tasks' overall details are as follows:
Task | Number of trajectories | Timesteps
Linear (N) | 640 | 1500
PD controller (N) | 640 | 1500
PD controller (U) | 1000 | 1500
Sine 1 joint (N) | 640 | 5000
Sine 1 joint (U) | 1000 | 5000
Sine 2 joints (U) | 640 | 5000
Sine 2 joints (N) | 1000 | 5000
In the Gaussian (normal) sampling cases, we ran 10 simulations for each set of λ parameters, which indicate the noise level.
E.1 LINEAR
$$u_{it} = w_i t + b_i \quad \text{for } i = 1, 2, 3 \qquad (22)$$
E.1.1 GAUSSIAN SAMPLING
$w_i = W_i + \epsilon_{w,i}$ for $i = 1, 2, 3$
$\epsilon_{w,i} \sim \mathcal{N}(0,\, e_w \cdot \|W_i\|_2)$, $e_w \sim \exp(\lambda_w)$ for $\lambda_w = 1, 5, 10, 50, 100, 500, 1000, 5000$
$b_i = B_i + \epsilon_{b,i}$ for $i = 1, 2, 3$
$\epsilon_{b,i} \sim \mathcal{N}(0,\, e_b \cdot \|B_i\|_2)$, $e_b \sim \exp(\lambda_b)$ for $\lambda_b = 1, 5, 10, 50, 100, 500, 1000, 5000$
$W = [0.00001, 0.0001, -0.00001]$, $B = [-0.28, -0.15, -0.08]$
E.2 PD CONTROLLER
The final destination is (π/10, 3π/4, 7π/12).
E.2.1 GAUSSIAN SAMPLING
$k_p = K_P + \epsilon_{k_p}$
$\epsilon_{k_p} \sim \mathcal{N}(0,\, e_{k_p} \cdot \|K_P\|)$, $e_{k_p} \sim \exp(\lambda_{k_p})$ for $\lambda_{k_p} = 1, 5, 10, 50, 100, 500, 1000, 5000$
$k_d = K_D + \epsilon_{k_d}$
$\epsilon_{k_d} \sim \mathcal{N}(0,\, e_{k_d} \cdot \|K_D\|)$, $e_{k_d} \sim \exp(\lambda_{k_d})$ for $\lambda_{k_d} = 1, 5, 10, 50, 100, 500, 1000, 5000$
E.2.2 UNIFORM SAMPLING
kp ∼ U(−0.5, 1.5),KP = 1
kd = KD = 0.01
E.3 SINE 1 JOINT
E.3.1 GAUSSIAN SAMPLING
$w = W + \epsilon_w$
$\epsilon_w \sim \mathcal{N}(0,\, e_w \cdot \|W\|)$, $e_w \sim \exp(\lambda_w)$ for $\lambda_w = 1, 5, 10, 50, 100, 500, 1000, 5000$
$a = A + \epsilon_a$
$\epsilon_a \sim \mathcal{N}(0,\, e_a \cdot \|A\|)$, $e_a \sim \exp(\lambda_a)$ for $\lambda_a = 1, 5, 10, 50, 100, 500, 1000, 5000$
$W = 0.01$, $A = 0.5$
E.3.2 UNIFORM SAMPLING
w ∼ U(0.005, 0.015), a = A = 0.5
E.3.3 SINE 2 JOINTS
E.3.4 GAUSSIAN SAMPLING
$w_i = W_i + \epsilon_{w,i}$ for $i = 1, 2$
$\epsilon_{w,i} \sim \mathcal{N}(0,\, e_w \cdot \|W\|_2)$, $e_w \sim \exp(\lambda_w)$ for $\lambda_w = 1, 5, 10, 50, 100, 500, 1000, 5000$
$a_i = A_i + \epsilon_{a,i}$ for $i = 1, 2$
$\epsilon_{a,i} \sim \mathcal{N}(0,\, e_a \cdot \|A\|_2)$, $e_a \sim \exp(\lambda_a)$ for $\lambda_a = 1, 5, 10, 50, 100, 500, 1000, 5000$
$W = [0.01, 0.01]$, $A = [-0.4, 0.5]$
E.3.5 UNIFORM SAMPLING
wi ∼ U(0.005, 0.015) for i = 1, 2, a = A = 0.5
E.4 GP SCORE:
Definition of the GP score: The score is defined as $1 - u/v$, where $u$ is the residual sum of squares $\sum_i (y_{\text{true},i} - y_{\text{pred},i})^2$ and $v$ is the total sum of squares $\sum_i (y_{\text{true},i} - \bar{y}_{\text{true}})^2$. The best possible score is 1.0.
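This is the standard coefficient of determination (R²); a direct numpy translation:

```python
import numpy as np

def gp_score(y_true, y_pred):
    """Score = 1 - u/v with u the residual and v the total sum of squares."""
    u = np.sum((y_true - y_pred) ** 2)
    v = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - u / v
```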
E.5 ZERO-SHOT PLANNING TASK:
For the task of section 4.4:
Number of training trajectories: 100, each with 1500 time steps
$K_d = 0.01$
$K_p$: uniformly sampled from $[0.2, 0.6]$
Initial point: $X_\circ = [\pi/2, \pi/2, \pi]$
Desired position: $[\pi/10, 3\pi/4, 7\pi/12]$
F DETAILED LITERATURE REVIEW
There has been a recent surge of interest in unsupervised methods in reinforcement learning, where a task-specific reward function is not the only driving force to train the agent (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016). A truly intelligent agent must behave intelligently in a range of tasks, not only in a single task associated with its reward function. This requires the agent to develop some sort of general competence that allows it to come up with solutions to new problems by combining low-level primitive skills. This general competence is a key factor for animals to quickly and efficiently adapt to a new problem (Weng et al., 2001). If traditional RL is called extrinsically motivated RL, the new framework is called intrinsically motivated RL. There have been many ideas in this line, with various definitions for the terms motivation and intrinsic. Some researchers assume a developmental period in which the agent acquires some reusable modular skills that can be easily combined to tackle more sophisticated tasks (Kaplan & Oudeyer, 2003; Weng et al., 2001). Curiosity and confidence are other unsupervised factors that can be used to drive the agent towards unexplored spaces to achieve new skills (Schmidhuber, 1991b; Kompella et al., 2017). Interestingly, there are observations in neuroscience that dopamine, a substance known to control one's motivation for extrinsic rewards, is also associated with intrinsic properties of the agent such as novelty and curiosity. A novel sensory stimulus activates the dopamine cells in the same way they are activated by extrinsic reward. Children build a collection of skills accumulatively while they engage in activities without a specific goal, e.g., hitting a ball repeatedly without a long-term target such as scoring a goal. The achieved skills contribute to their stability while handling objects (Touwen et al., 1992).
Another line of work concerns the fundamental constraints of the agent/environment and ensures those constraints are met while learning. In many practical systems, learning episodes must halt if the system is likely to undergo an irreversible change. For instance, the training episodes of a fragile robot must ensure the robot does not fall and will not be broken in any circumstance while acting under a certain policy. The general name safe RL embodies ideas to tackle such issues in current interactive learning algorithms (Garcıa & Fernández, 2015). One major aspect of safety is stability, which loosely means that the states of the system converge to some invariant sets or remain within a certain bound (Lyapunov, 1992). Control theory relies on a physical model of the system to guarantee stability (Khalil, 2002). When the physical model is not known in advance, the model is either learned along with the policy (model-based RL) or implicitly distilled into the value function (model-free RL) (Sutton & Barto, 2018). Stability can be categorized as an intrinsic motivation for the agent: no matter what task the agent aims to solve, it must remain stable all the time. Learning the transition model, which is the major concern of model-based RL, can also be seen as intrinsic motivation. The agent learns to predict the future step given the current state. The advantage of learning a model, even an inaccurate one, is twofold: the agent would know where to go and where not to go. It knows which regions of the state space are unsafe to explore and must be avoided. It also knows which regions are unexplored and might be informative for improving the model. This brings us to another view of intrinsic reward, one that encourages diversity.
Our work is also relevant to sensitivity analysis and its use in training the parameters of dynamical models. After Chen et al.'s NeuralODE on training neural networks by sensitivity analysis of the network parameters, the method was successfully applied to various tasks such as learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as a model-free sensitivity analysis on real-world systems. In NeuralODE, the gradient with respect to the parameters requires solving ODEs for both states and adjoint states, which requires a transition model. Since we work directly on the physical system, we do not need to calculate the integrals forward in time. The system itself acts as a physical ODE solver.
The importance of learning from unlabelled experiences is a known fact in animals. Many animals function efficiently soon after birth, before being exposed to massive labeled experience. Part of this might be due to unsupervised learning, but a major part of the story can be a genetic heritage accumulated over years of evolution, which Zador calls the genomic bottleneck. The same idea turned out to be valid in statistical learning, where an automatically discovered neural network architecture performs surprisingly well with a shared random weight (Gaier & Ha, 2019). The inductive bias embedded in neural network architectures could be analogous to the wiring of the brains of animal babies, which transfers from generation to generation through genes.
F.1 PROOFS
Proof of the lemma on voxelization error.
Proof. The voxels become boxes in 3D, as in fig. 15. The gradient is estimated from the distance between two points in 3D coordinates. Hence, the source of voxelization error is approximating the distance between two points in 3D with the distance between the centers of the corresponding boxes to which those points belong. This error is written next to the boxes in fig. 15. The maximum error is inversely proportional to the distance between voxels, meaning that voxels located far away from each other induce less voxelization error. This is intuitively clear: when two points are very distant from each other, a slight change in their positions does not change the distance between them considerably. The upper bound on the error, however, occurs for a single voxel, where the error is bounded by the size of the voxel.
G MORE RESULTS
In this section, we present the results of the extra experiments that were omitted from the main text due to the space limit.
The following figures show GP models trained by a set of directional derivatives collected during the shaking phase. The results are provided for the experiments of sections 4.2 and 4.3.
| 1. What are the strengths and weaknesses of the proposed method for learning a local gradient useful for policy optimization?
2. How does the method compare to other approaches in the field, such as random search/evolutionary computation and model-free RL?
3. What are the limitations of the experimental evaluation provided in the paper?
4. How can the authors improve the precision and clarity of their ideas, particularly in the context of related work and experimental evaluation?
5. Are there any concerns regarding the use of voxelization in the method, and how might the authors address these concerns?
6. What additional experiments or simulations could the authors perform to further investigate the convergence, bias, variance, and reproducibility of the gradient model?
7. Are there any suggestions for improving the readability and presentation of the paper, such as addressing spelling errors, increasing font sizes in figures, and reordering figures to match their introduction in the text? | Review | Review
*Summary of paper*
This paper investigates the use of random perturbations applied to a robotic policy to learn a local gradient useful for policy optimization. The method aims to learn a policy directly on a real physical robotic system, bypassing both simulation models and model-free RL. Training pairs are gathered by perturbations of a starting policy, and the "gradient" is captured in a probabilistic model learned from the training data. The paper includes experiments on a custom 3-DOF robotic platform.
*Decision*
I vote for rejecting this paper. While the idea is interesting, the paper lacks precision in key areas and the method is not placed in context among related work. Further, it fails to communicate key ideas (particularly in the experiments) to a non-robotics reader. Without sufficient clarity and background, it is not suited to a general machine learning conference.
- Lemma 3, which attempts to justify the use of voxelization, and its proof are both imprecise and inadequate. To improve precision, please define the "error caused by voxelization" in mathematical terms, e.g. ||c_i - x_i||. Also, while the statement of the lemma un-intuitively implies that larger voxels introduce smaller errors, the proof seems to say that larger errors will result for smaller gradients if larger voxels are used.
- Related work: How does this work relate to random search/evolutionary computation? How does it compare to performing those methods or a model-free RL method directly on the robot? How does it compare to learning using an inaccurate model for robot dynamics? Presumably there are numerous methods that have been tried in this area, so further context is needed.
- The evaluation is unclear, at least to a non-expert in robotics. A lack of quantitative evaluation further exacerbates this issue: nearly all experiments, even those with associated plots, are characterized qualitatively and without reference to the performance of related methods.
- In addition to addressing the limitations above, I would encourage the authors to consider the use of experiments in simulation to thoroughly and quantitatively investigate the convergence/bias/variance of the gradient model w.r.t. #DoF of the robot, length of the trajectory, voxelization, # sampled trajectories, perturbation sampling method, and robot reliability/reproducibility
*Additional feedback*
- spelling errors throughout; please check thoroughly
- the captions/labels/etc. in most figures are far too small to read in a printed copy of the paper
- What is the intuition for the "empirical distribution p_e(T|\pi) = ..." on page 2? Is it counting the exact matches between the trajectory T and the M observed trajectories? (This may be more clear in the context of voxelization introduced later.)
- Figure 3: what are the units for \gamma? what is the time step?
- many of the figures are out of order w.r.t. their introduction in the text |
ICLR | Title
Learning by shaking: Computing policy gradients by physical forward-propagation
Abstract
Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamic model can be prohibitively expensive. Learning the dynamic model of a system can reduce the cost of learning the policy, but it can also introduce bias if it is not accurate. We propose a middle ground where instead of the transition model, the sensitivity of the trajectories with respect to the perturbation (shaking) of the parameters is learned. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We assay our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them. (a) (b) (c) (d) Figure 1: Physical finger platform in action with different policies.
1 INTRODUCTION
Traditional reinforcement learning crucially relies on reward(Sutton & Barto, 2018). However, reward binds the agent to a certain task for which the reward represents success. Aligned with the recent surge of interest in unsupervised methods in reinforcement learning (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016) and previously proposed ideas (Schmidhuber, 1991a; 2010), we argue that there exist properties of a dynamical system which are not tied to any particular task, yet highly useful, and their knowledge can help solve other tasks more efficiently. This work focuses on the sensitivity of the produced trajectories of the system with respect to the policy so called Physical Derivatives. The term physical comes from the fact that it uses the physics of the system rather than any idealized model. We learn a map from the directions in which policy parameters change to the directions in which every state of the trajectory changes. In general, our algorithm learns the Jacobian matrix of the system at every time step through the trajectory. The training phase consists of physically calculating directional derivatives by the finite difference after applying perturbed versions of a nominal policy (a.k.a. controller). Perturbing the parameters of the controller is the reason for naming our method shaking. The test phase uses these directional derivatives to compute derivatives along unseen directions. Due to the difficulty of computing the Jacobian matrix by the finite difference in higher dimensions, we use random controllers joint with probabilistic learning methods to obtain a robust estimate of the Jacobian matrix at each instant of time along a trajectory. We are capable of this generalization to unseen perturbations because the trajectories of physical systems live on an intrinsic low-dimensional manifold and change slowly with the small changes in the parameters of the system (Koopman, 1931). This assumption holds as long as the system is not chaotic or close to a bifurcation condition (Khalil, 2002).
1.1 PRELIMINARIES
A reward function describes how close the agent is to the solution of the target task. In the absence of the reward, the agent will be given no means to find its way towards the solution. Let x ∈ X ⊆ Rd be a d-dimensional state vector that fully describes the environment with which the agent interacts. At each state, the agent is allowed to take action u ∈ U ⊆ Rq from a q-dimensional action space via a parameterised policy function u = π(x;θ). The agent will be rewarded r(x,u) by the function r : X × U → R when it takes action u at state x. The goal of learning is to update θ such that some desired target is achieved. The target can be anything as long as a concrete reward function is associated with it. In stochastic cases, return R : Π(Θ) → R is defined as a cumulative future discounted reward whose expectation is often of main interest. For parametric policies, the space of feasible parameters Θ has a one-to-one correspondence to the policy space Π. The agent who takes on the policy π from state x0 produces the trajectory T ∈ T where T is the space of possible trajectories. For a return function R : T→ R, the expected return becomes a function of the policy as J(πθ) = ET {R(T )} where the expectation is taken with respect to the probability distribution P (T |πθ). There exist two major classes of approaches in reinforcement learning: value-based methods and value-free methods. In the first class, a surrogate function is defined to approximate the value of either a state V (x) or a state-action pair Q(x,u). The policy is updated such that the agent tends towards states with higher values. The value-free methods update the policy directly without any need for an auxiliary function such as V or Q. This paper mainly concerns the second class. The policy parameters are updated as
θt+1 = θt + α ∂J(πθ)
∂θ ∣∣∣∣ θ=θt
(1)
and the gradient ∂J(πθ)/∂θ is written as
∂J(πθ)
∂θ = ∫ T ∂p(T |πθ) ∂θ R(T ) dT (2)
which is normally difficult to compute in practice. As can be seen in eq. (2), the integrand of the r.h.s. consists of two terms. The second term R(T ) is the return which is defined according to the target task. Hence, this term is task-dependent. The first term ∂p(T |πθ)/∂θ though shows how the trajectories change with respect to a change in the policy. Notice that there is no notion of reward or any task-dependent quantities in this term. For an empirical distribution pe(T |π) = 1 M ∑M i=1 δ(T − T (i)), the dependence of partial derivative of the distribtion of T on the partial derivative of T can be explicitely derived as
∂pe(T |πθ) ∂θ = 1 M M∑ i=1 u1(T − T (i)) ∂T ∂θ
(3)
where u1 is the unit doublet function (derivative of the Dirac delta function). This examplary distribution makes it clear that the change in the distribution of trajetories relates to the change of the trajectories themselves. As an unsupervised object, ∂T /∂θ is of main interest in this paper.
1.2 PHYSICAL DERIVATIVE
In this paper, we investigate the feasibility of learning a less explored unsupervised quantity, the so called Physical Derivative which is computed directly from the physical system. In abstract terms, we perturb the policy and learn the effect of its perturbation on the resulting trajectory. The difference from traditional RL whose algorithms are based on eq. (1) is the absence of a specified reward function. Instead, we generate samples from ∂p(T |πθ)/∂θ of eq. (2) that makes it possible to compute ∂J(πθ)/∂θ for an arbitrary return function R. If the exact model of the system is known, control theory has a full set of tools to intervene in the system with stability and performance guarantees. When the system is unknown, one could identify the system as a preliminary step followed by a normal control synthesis process from control theory (Ljung, 2001). Otherwise, the model and the policy can be learned together in a model-based RL (Sutton, 1996) or in some cases adaptive control (Sastry & Bodson, 2011). We argue that learning physical derivatives is a middle ground. It is not model-based in the sense that it does not assume knowing the exact model of the system. Rather, it knows how the trajectories of the system change as a result of perturbing the policy
parameters. This differential information of the system has applications in many downstream tasks. This work focuses on the concept and introduction of physical derivatives and direct applications would go significantly beyond the scope of this work. Few potential applications are discussed with more details in appendix C.
Our contributions— In summary, the key contributions of the current paper are as follows:
• A method to generate training pairs to learn the map from the policy perturbations to the resulting changes in the trajectories.
• Learning the above map as a probabilistic function and showing that it generalizes to unseen perturbations in the policy.
• Use the inverse of the above map to perturb the policy in the desired direction to achieve certain goals without conventional RL methods.
• Use a physical custom-built robotic platform to test the method and propose solutions to deal with the inherent issues of the physical system to ensure the practicality of the method (see fig. 1 for images of the platform and and appendix A for technical details).
• The supplementary materials for the paper, including code and the videos of the robot in action can be found in https://sites.google.com/view/ physicalderivatives/
2 METHOD
In this section, we describe our pipeline to estimate the physical derivatives and our proposed solutions to the inevitable challenges that are likely to occur while working with a real physical robot. We are interested in ∂T /∂θ which denotes how a small change in the parameters θ of the controller results in a different trajectory produced by the system. We normally consider a finite period of time [0, T ] and the trajectory is an ordered list of states T = [x0,x1, . . . ,xT ] where the subscript shows the time step. Therefore, having ∂T /∂θ is equivalent with having ∂xt/∂θ for every t ∈ {1, . . . , T}. Notice that the initial state x0 is chosen by us. Hence we can see it either as a constant or as a changeable parameter in θ. We kept it fixed in our experiments.
Assume xt ∈ Rd and θ ∈ Rm. Hence,∇θxt = ∂xt/∂θ ∈ Rd×m where the tth row of this matrix is ∇θxit = (∂xit/∂θ)T ∈ Rm showing how the ith dimension of the state vector changes in response to a perturbation in θ. The directional derivative of xit in the direction δθ is defined as
∇δθθ xit = 〈∇θxit, δθ
|δθ| 〉. (4)
If (4) is available form linearly independent and orthonormal directions, {δθ(1), δθ(2), . . . , δθ(m)}, the directional derivative along an arbitrary δθ can be approximated by
∇δθθ xit = m∑ j=1 cj〈∇θxit, δθ(j)〉 (5)
where cj = 〈δθ, δθ(j)〉 is the coordinates of the desired direction in the coordinate system formed by the orthonormal bases.
In practice, m directions δθ(j) can be randomly chosen or can be along some pre-defined axes of the coordinate system. To compute 〈∇θxit, δθ(j)〉, the nominal policy parameters θ are perturbed by δθ(j) as θ(j) ← θ + δθ(j) and the derivative is computed as
〈∇θxit, δθ(j)〉 = lim h→0 xit(θ + hδθ (j))− xit(θ) h . (6)
This quantity is often approximated by finite difference where h takes a small nonzero value. By perturbing the parameters θ along m orthonormal directions δθ(j) and computing the approximate directional derivative by (6), ∇δθθ xit can be computed along every arbitrary direction δθ, meaning that, we can compute∇θxit by evaluating it along any direction which is the aim of this paper.
✓1 <latexit sha1_base64="b2Ff/oUFJw0eznxXS1RygRK2bZk=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGnf65crbtWdg6wSLycVyNHol796g5ilEVfIJDWm67kJ+hnVKJjk01IvNTyhbEyHvGupohE3fja/d0rOrDIgYaxtKSRz9fdERiNjJlFgOyOKI7PszcT/vG6K4bWfCZWkyBVbLApTSTAms+fJQGjOUE4soUwLeythI6opQxtRyYbgLb+8Slq1qndRrd1fVuo3eRxFOIFTOAcPrqAOd9CAJjCQ8Ayv8OY8Oi/Ou/OxaC04+cwx/IHz+QPOtY/Q</latexit>
✓2 <latexit sha1_base64="oe3DagNbCs6bjj10ybLfZH9d5SY=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGm/1i9X3Ko7B1klXk4qkKPRL3/1BjFLI66QSWpM13MT9DOqUTDJp6VeanhC2ZgOeddSRSNu/Gx+75ScWWVAwljbUkjm6u+JjEbGTKLAdkYUR2bZm4n/ed0Uw2s/EypJkSu2WBSmkmBMZs+TgdCcoZxYQpkW9lbCRlRThjaikg3BW355lbRqVe+iWru/rNRv8jiKcAKncA4eXEEd7qABTWAg4Rle4c15dF6cd+dj0Vpw8plj+APn8wfQOY/R</latexit>
✓1 <latexit sha1_base64="b2Ff/oUFJw0eznxXS1RygRK2bZk=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGnf65crbtWdg6wSLycVyNHol796g5ilEVfIJDWm67kJ+hnVKJjk01IvNTyhbEyHvGupohE3fja/d0rOrDIgYaxtKSRz9fdERiNjJlFgOyOKI7PszcT/vG6K4bWfCZWkyBVbLApTSTAms+fJQGjOUE4soUwLeythI6opQxtRyYbgLb+8Slq1qndRrd1fVuo3eRxFOIFTOAcPrqAOd9CAJjCQ8Ayv8OY8Oi/Ou/OxaC04+cwx/IHz+QPOtY/Q</latexit>
✓2 <latexit sha1_base64="oe3DagNbCs6bjj10ybLfZH9d5SY=">AAAB73icbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Ae0oWy2m3bpZhN3J0IJ/RNePCji1b/jzX/jts1BWx8MPN6bYWZekEhh0HW/ncLa+sbmVnG7tLO7t39QPjxqmTjVjDdZLGPdCajhUijeRIGSdxLNaRRI3g7GtzO//cS1EbF6wEnC/YgOlQgFo2ilTg9HHGm/1i9X3Ko7B1klXk4qkKPRL3/1BjFLI66QSWpM13MT9DOqUTDJp6VeanhC2ZgOeddSRSNu/Gx+75ScWWVAwljbUkjm6u+JjEbGTKLAdkYUR2bZm4n/ed0Uw2s/EypJkSu2WBSmkmBMZs+TgdCcoZxYQpkW9lbCRlRThjaikg3BW355lbRqVe+iWru/rNRv8jiKcAKncA4eXEEd7qABTWAg4Rle4c15dF6cd+dj0Vpw8plj+APn8wfQOY/R</latexit>
Figure 2: Gaussian (left) and uniform (right) shaking examples.
In the matrix form for x ∈ Rd, we can compute ∇δθ(j)θ x = [∇δθ (j) θ x1,∇δθ (j) θ x1, . . . ,∇δθ (j) θ xd] T in a single run by computing (6) for all d dimensions of the states. Let’s define
∆θx , [∇δθ (1) θ x,∇δθ (2) θ x, . . . ,∇δθ (m) θ x] (7)
where ∆θx ∈ Rd×m and let Λ = [δθ(1), δθ(2), . . . , δθ(m)]. Therefore, if ∆δθθ x shows the directional derivative of x along δθ, we can write it as:
∇δθθ x = ∆θx(ΛTδθ) (8)
which is only a vectoral representation of eq. (4). Even though the linear formula of eq. (8) requires only m directional derivatives, it has two major downsides. First, it does not give a clear way to incorporate more than m training directional physical derivatives. Second, the linear approximation remains valid only for very small δθ. We propose Gaussian Process (GP) as a nonlinear probabilistic function approximator (Rasmussen, 2003) to capture the maps ĝt defined as
ĝt : Θ→ X (9) ĝt(δθ) = δx (10)
where subscript t shows the function that maps δθ to the change of the states δxt at time step t. We considered distinct functions for every time step. Taking into account the commonality among the function approximators corresponding to different time steps is deferred to future research. Learning this map requires training data that comes from an initial data collection phase called shaking. Shaking refers to perturbing parameters of the controller to obtain the set of trajectories produced by the perturbed controllers.
The perturbation can be either regular or stochastic. Stochastic perturbations have the advantage over regular perturbations that the agent does not need to be worried about perturbing the parameters in a particular direction. Besides, in some cases, perturbing the parameters of the policy in certain directions is infeasible. We propose two methods of shaking called Gaussian and Uniform shaking.
Gaussian shaking— Likely values of θ create nominal policies encoded by {θ(1),θ(2), . . . ,θ(m)}. We put Gaussian distributions centered at each of the nominal values resulting in a mixture of Gaussians. To reduce the hyper-parameters, we assume the variances of the Gaussians are themselves sampled from an exponential distribution making sure they all take positive values (See fig. 2 left). Here, we manually choose a reasonable value for the rate parameter of the exponential distribution. Doing inference on the hyper-parameters of the sampling distributions can be a topic for future research especially in active learning for a more clever less costly sampling stratgey.
Uniform shaking— In this setting, the state space of the changeable parameters of the policy is discretized and a uniform distribution is assumed around each value of this grid with some overlapping with the neighboring cells (See fig. 2 right).
We show the effect of each of these sampling methods later in section 4. We observed that the results are less sensitive to the hyper-parameters of the uniform sampling than Gaussian sampling. A carelessly chosen rate for the exponential distribution that generates the variances of the Gaussians in Gaussian sampling can result in too local or global sampling that gives rise to a large variance or bias in the estimated gradients.
3 REAL WORLD CHALLENGES
In this section, we present two major low-level challenges that are common when dealing with physical systems. There exist inherent noise and imperfection in the system that results in a change in the produced trajectories while the policy parameters are kept fixed. In our finger platform, we observed two different major sources of noise which are likely to occur in other physical systems too. We call them temporal and spatial noise for the reasons that come in the following.
Temporal noise. The temporal noise represented by n affects trajectories by shifting them in time xt ← xt + n for t = 0, 1, . . . , T. (11)
Notice that the absence of subscript t in n shows that this noise is not time-dependent, i.e., the time shift does not change along the trajectory as time proceeds.
Spatial noise. The trajectories affected by spatial noise cannot be aligned with each other by shifting forward or backward in time. We can model this noise as a state-dependent influence on the state of the system at every time step.
xt ← xt + nxt (12)
The following definition makes the distinction more concrete. Definition 1. Consider two trajectories T (1)(t) and T (2)(t) as two temporal signals. Assume St◦ is the shift-in-time operator defined as
St◦T (t) = T (t+ t◦) (13) for an arbitrary function of time T (t). We say T (2)(t) is temporally noisy version of T (1)(t) if
∃t◦ ∈ R s.t. ‖T (2) − St◦T (1)‖1 ≤ (14) where is a hyper-parameter threshold that reflects our prior confidence about the accuracy of the motors, joints, physical and electrical elements (in general construction process) of the robot. On the other hand, T (2) is called a spatially noisy version of T (1) if
@t◦ ∈ R s.t. ‖T (2) − St◦T (1)‖1 ≤ (15)
3.0.1 SOLUTION TO TEMPORAL NOISE
Fortunately, this type of noise is not state-dependent by definition. If we find out how much a trajectory is shifted in time with respect to another trajectory, we can simply shift the trajectory for those many time steps and compensate for the delay. Hence, the problem becomes detecting the lagged trajectories with respect to a reference trajectory and also estimate the amount of the required time shift to compensate for the delay. We can either use physical landmarks in the trajectories to align them or use the correlation between them as a measure of alignment. The later gave better results, hence, we postpone the description of the former to the appendix D.1.
Correlation-based delay estimation In this method, we use the correlation between zero-meaned trajectories T (i) and T (j) to check if one is the lagged version of the other one. The delay τ is found by
τ∗ = argmax τ T−τ∑ t=0 〈Sτx(i)t ,x(j)t 〉 (16)
where Sτ is a shift-operator by τ ∈ Z time steps. In practice, we take one trajectory of {T (1), T (2), . . . , T (M)}, e.g. T (r) as the reference and synchronize other trajectories with respect to it using eq. (16). The trajectories must be initially normalized to avoid trivial solutions where every trajectory is pushed towards the larger parts of the reference trajectory. For illustrative purposes, the plots of fig. 14 show a sample of the lagged trajectory from the finger platform and its correction by the above method.
3.1 SOLUTION TO SPATIAL NOISE
The spatial noise can be a stochastic function of the actuator, environmental change, and electronic drivers. In a perfect model of the transition dynamics xt+1 = f(xt,ut), applying the same control sequence {u0,u1, . . . ,uT−1} always results in the same sequence of states {x1,x2, . . . ,xT } when it starts from the same initial state x0. This assumption is often violated in physical systems as different runs of the same policy may result in different trajectories as can be seen in fig. 10 in the Appendix. The noise in the dynamics can be any function of states, input, and time. Therefore, it is difficult to model this noise since it requires a prohibitively large number of random experiments. The good news is that if the physical system is built properly, the effect of this noise is expectedly low. Based on our observations from the finger platform, we can assume the following.
Assumption 2 (Limit on the physical noise). Let the control sequence U = {u_0, u_1, . . . , u_{T−1}} be applied to the system M times, resulting in the sequences of states T^{(1)}, T^{(2)}, . . . , T^{(M)}. There exists a relatively small ζ such that
‖T^{(i)} − T^{(j)}‖_∞ ≤ ζ for every i, j ∈ {1, 2, . . . , M}. (17)
The word relatively here means that the change of the trajectory due to the inherent physical noise of the system must be small compared to the change of the trajectories when the parameters of the policy are perturbed.
To reduce the sensitivity of the estimated gradient to this unwanted spatial noise, we divide the state space of the physical system into regularly located adjacent cells called voxels. Each voxel vox(c) is represented by its center c and is defined as
vox(c) = {x ∈ X | ‖x − c‖_∞ ≤ γ} (18)
where γ is the parameter of the voxelization. The voxel is used roughly as a superstate: every state that ends up within vox(c) gives rise to the same superstate. After recording the trajectories from the robot, every state is mapped to the center of the voxel it belongs to as
c ← x for x ∈ vox(c) (19)
After voxelization, we work with c instead of x. For example, all the gradients of (7) are computed as ∇_θ c rather than ∇_θ x. To illustrate the positive effect of voxelization of the state space, fig. 3 shows that increasing the voxel size improves the overlap between two trajectories that deviate from each other due to the inherent spatial noise of the system, i.e., not because of perturbing the parameters of the policy but because of the inherent imperfection of the mechanical and electrical components of the system. This benefit comes at a cost, namely the error introduced by voxelization. Fortunately, this error is bounded, due to the following lemma.
Lemma 3. The error caused by voxelization is bounded by the voxel size, and its relative effect decreases as the distance between the voxels involved grows (see appendix F.1 for a brief proof).
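A minimal sketch of the mapping of eqs. (18) and (19), under the assumption that the voxel centers lie on a cubic grid with spacing 2γ anchored at the origin (the grid anchoring is our choice; the paper does not specify it):

```python
import numpy as np

def voxelize(x, gamma):
    """Map each state to the center c of the voxel it falls in,
    where vox(c) = {x : ||x - c||_inf <= gamma}."""
    # Centers are 2*gamma apart, so rounding picks the nearest center.
    return 2.0 * gamma * np.round(np.asarray(x) / (2.0 * gamma))

# Two runs of the same policy that differ only by small spatial noise are
# mapped to identical superstates once gamma dominates the noise level.
traj_a = np.array([[0.10, 0.52], [0.31, 0.49]])
traj_b = traj_a + 0.01 * np.random.randn(*traj_a.shape)
print(np.allclose(voxelize(traj_a, 0.1), voxelize(traj_b, 0.1)))  # usually True
```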
After dealing with the challenge of inherent noise, we pursue the main goal of this paper, which is estimating ∂T/∂θ directly from the physical system. In the following, we investigate the use of different types of controllers to emphasize the extent of applicability of the proposed method.
4 EXPERIMENTS
In this section, we show how physical derivatives can be estimated in practice through several experiments. Notice that our work is different from computing gradients around the working point of a system by finite differences: we aim to collect samples of such gradients by perturbing a grid of nominal values of the policy parameters and then generalize to unseen perturbations with a Gaussian process as a probabilistic regression method. The experiments are designed to show each challenge separately and the efficacy of our proposed solution to it. Due to space constraints, details of the physical platform can be found in section A in the Appendix. See the project page1 for videos of the robot while collecting data for the different experiments and for more supporting material.
1https://sites.google.com/view/physicalderivatives/
4.1 LINEAR OPEN-LOOP CONTROLLER
As a simple yet general policy, we consider in this section an open-loop controller that is a linear function of time. The policy u_t = [u_{1t}, u_{2t}, u_{3t}] specifies the torques applied to the three motors {m1, m2, m3} of the system and is assigned as
u_{it} = w_i t + b_i for i = 1, 2, 3 (20)
Notice that the torque consists of two terms: the first term w_i t grows with time, while the second term b_i remains constant. The controller has 6 parameters in total, denoted by θ. The task is to predict ∇_θ x_t for every t along the trajectory. In the training phase, the training data is obtained via shaking as described in section 2.
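The sketch below illustrates how such training pairs could be gathered; the rollout callable stands in for executing a controller on the physical finger and is our own abstraction, and the perturbation scale is an illustrative value:

```python
import numpy as np

def linear_policy(theta, t):
    """Open-loop torques of eq. (20): u_it = w_i * t + b_i,
    with theta = [w1, w2, w3, b1, b2, b3]."""
    w, b = theta[:3], theta[3:]
    return w * t + b

def shake(theta_nom, rollout, n_perturb=16, scale=1e-3, rng=None):
    """Collect (delta_theta, delta_trajectory) training pairs by shaking.
    `rollout(theta)` runs the system and returns states of shape (T, d)."""
    rng = rng or np.random.default_rng()
    traj_nom = rollout(theta_nom)
    pairs = []
    for _ in range(n_perturb):
        d_theta = scale * rng.standard_normal(theta_nom.shape)
        # One finite-difference directional-derivative sample per time step.
        pairs.append((d_theta, rollout(theta_nom + d_theta) - traj_nom))
    return pairs
```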
fig. 7 shows examples of nominal trajectories together with the trajectories produced by the perturbed controller and the computed derivatives. The arrows are plotted as originating from the perturbed trajectories only for easier distinction. Each arrow corresponds to the change of the states at a certain time step on the source trajectory as a result of perturbing the policy. Each figure corresponds to a pair of nominal values of {w, b} for the linear open-loop controller. See fig. 29 for more examples.
4.2 NONLINEAR OPEN-LOOP CONTROLLER
Physical derivatives can naturally be computed for either linear or nonlinear controllers, which makes our approach different from taking the gradient of models through time. In model-based methods, if the model's transition dynamics is not differentiable, taking the derivative is theoretically challenging. Our method, however, takes advantage of the real physics of the system to compute the gradients regardless of whether an approximating model would be differentiable or not. To elaborate on this, we test our method on a simple but nonlinear policy, u_t = A sin(ωt). The sinusoidal torque is applied
to either one or two motors of the system to investigate the performance of our method. We tested Gaussian and uniform shaking for θ = {A, ω} as the parameters of this controller. The GP interpolation of the partial derivatives at some time instances along the trajectory can be seen in fig. 6 and, more extensively, in figs. 16 to 18 in the Appendix. One might be interested in the direction of the predicted derivative instead of its exact size. To this end, we take several test perturbations for every time step and use cos(α) as a measure of alignment between the predicted and ground-truth derivative vectors. The time evolution of the histogram of this measure along the trajectory shows better alignment as time proceeds. This effect can be seen in figs. 27 and 28, and it confirms our observation of an initial transient noise in the system that dies out gradually as time progresses. The overall performance of our method in predicting physical derivatives along unseen directions for the two shaking methods is shown in appendix E.
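A sketch of this interpolation step, fitting one GP per time step for the map ĝ_t : δθ → δx_t with scikit-learn, together with the cos(α) alignment measure (the kernel and noise level are illustrative choices, not the paper's exact hyper-parameters):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_physical_derivatives(pairs):
    """Fit one GP per time step from the (delta_theta (m,),
    delta_trajectory (T, d)) pairs gathered during shaking."""
    D_theta = np.stack([p[0] for p in pairs])   # (N, m)
    D_x = np.stack([p[1] for p in pairs])       # (N, T, d)
    return [GaussianProcessRegressor(kernel=RBF(), alpha=1e-6)
            .fit(D_theta, D_x[:, t, :]) for t in range(D_x.shape[1])]

def alignment(pred, truth):
    """cos(alpha) between predicted and ground-truth derivative vectors."""
    return float(pred @ truth /
                 (np.linalg.norm(pred) * np.linalg.norm(truth) + 1e-12))

# Query along an unseen perturbation: dx_t = gps[t].predict(d_theta[None])[0]
```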
4.3 FEEDBACK CONTROLLER
Often in practice, the policy incorporates some function of the states of the system. Well-known examples that have been used extensively in control applications are P, PD, PI and PID controllers. Here, we consider two members of this family, namely P and PD controllers. The policy becomes u = K_p e for P controllers and u = K_p e + K_d ė for PD controllers, where the error e is the difference between the current state x and the desired state x*. The parameters of the controller {K_p, K_d} are scalar values that multiply the error vector element-wise. This implies that the controller parameters are the same for the three motors, leaving the controller of the whole platform with two parameters that weight the value and the rate of the error. We applied uniform and Gaussian shaking to the set of parameters θ = {K_p, K_d} under different scenarios. The GP interpolation of the physical derivatives at some time instances along the trajectory can be seen in fig. 6 and, more extensively, in figs. 19 to 24 in the Appendix. The time evolution of the histogram of the misalignment between predicted and ground-truth directional derivatives (see figs. 25 and 28 in the appendix) once again confirms the existence of the initial transient noise observed in section 4.2. Similar to the sinusoidal experiment, the overall performance of our method is presented in appendix E.
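For concreteness, a minimal sketch of these feedback policies; the sign convention e = x* − x is our reading of 'the difference between the current state and the desired state':

```python
import numpy as np

def pd_policy(x, x_dot, x_star, Kp, Kd=0.0):
    """P controller (Kd = 0) or PD controller: u = Kp*e + Kd*de/dt,
    with e = x_star - x and de/dt = -x_dot for a fixed target x_star.
    Kp and Kd are scalars shared by the three motors."""
    e = np.asarray(x_star) - np.asarray(x)
    return Kp * e - Kd * np.asarray(x_dot)
```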
4.4 ZERO-SHOT PLANNING TASK
Our previous experiments in sections 4.1, 4.2 and 4.3 showed that learning the physical derivative map is feasible for various types of controllers. In this section, we demonstrate an example of a constraint satisfaction task by means of the physical derivative map. In this experiment, the superscript (s) corresponds to the nominal trajectory, which is called the source. Assume the system is controlled by a PD controller to reach a target state x*, i.e., the control torques are designed as u = k_p^{(s)}(x − x*) + k_d^{(s)} ẋ. The controller does a decent job of reaching the target state given reasonable values for k_p and k_d. However, such a controller does not give us a clear way to shape the trajectory that starts from x◦ and ends at x*. Assume it is desired that the nominal controlled trajectory T^{(s)} passes through an intermediate state x*_t at time t on its way towards the target state x* (we could equally assume that the system must avoid some regions of the state space for safety reasons). The solution with physical derivatives is as follows. Assume k_d^{(s)} is fixed and only k_p^{(s)} is changeable. If the physical derivative map is available, we have access to ĝ_t(k*_p − k_p^{(s)}) = (x*_t − x_t^{(s)})/(k*_p − k_p^{(s)}). By simple algebraic rearrangement, we have

k*_p = (x*_t − x_t^{(s)}) / ĝ_t(k*_p − k_p^{(s)}) + k_p^{(s)}. (21)
The new parameter of the policy is supposed to push the source trajectory T (s) towards a target trajectory T ∗ that passes through the desired state x∗t at time t. The result of this experiment on our physical finger platform can be seen in fig. 8.
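A sketch of this rearrangement in code, for a single state coordinate. Since eq. (21) is implicit in k*_p (ĝ_t is evaluated at the unknown perturbation), one simple way to solve it is a fixed-point iteration; the scheme and names below are ours:

```python
def retarget_kp(g_t, kp_s, x_s_t, x_star_t, iters=20):
    """Solve eq. (21) for kp*. g_t(dk) returns the learned slope
    (x_t - x_s_t) / dk at time t for a perturbation dk of kp."""
    kp = kp_s + 1e-3                 # small initial guess for kp*
    for _ in range(iters):
        kp = kp_s + (x_star_t - x_s_t) / g_t(kp - kp_s)
    return kp
```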
4.5 RELATED WORKS
A truly intelligent agent must develop some sort of general competence that allows it to combine primitive skills to master a range of tasks, not only a single task associated with a specified reward function. The major part of such competence comes from unsupervised experiences. Animals use a similar competence to quickly adapt to new environments (Weng et al., 2001) and function efficiently soon after birth, before being exposed to massive supervised experience (Zador, 2019). Due to its generality, such basic skills can be inherited over generations rather than being learned from scratch (Gaier & Ha, 2019). Unlike traditional RL, where learning is driven by an extrinsic reward signal, intrinsically motivated RL concerns task-agnostic learning. Similar to animal babies (Touwen et al., 1992), the agent may undergo a developmental period in which it acquires reusable modular skills (Kaplan & Oudeyer, 2003; Weng et al., 2001) such as curiosity and confidence (Schmidhuber, 1991a; Kompella et al., 2017). Another aspect of such general competence is the ability of the agent to remain safe during its learning and deployment period (Garcıa & Fernández, 2015). In physical systems, especially in continuous control, stability is a major aspect of safety; it implies that the states of the system converge to some invariant sets or remain within a certain bound (Lyapunov, 1992). Control theory often assumes the model of the system is known in order to guarantee stability (Khalil, 2002). In the absence of the model, model-based RL learns the model along with
the policy. Hence, learning the transition model to predict the states in the future can be another intrinsic reward.
From a technical point of view, our work is relevant to sensitivity analysis and how it is used to train the parameters of models, as in Chen et al.'s NeuralODE. The method has proven effective in many tasks, including learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as a model-free sensitivity analysis on real-world systems. In NeuralODE, computing the gradient with respect to the parameters requires solving ODEs for both states and adjoint states, which requires a transition model. Since we are working directly on the physical system, we do not need to calculate the integrals forward in time: the system itself acts as a physical ODE solver. We refer to appendix F for a more detailed review of the related work.
5 CONCLUSION
In this paper, we presented a method to learn how the trajectories of a physical real-world dynamical system change with respect to a change in the policy parameters. We tested our method on a custom-built platform called the finger robot, which allows testing a variety of controllers with various settings, to show the applicability of our method for linear, nonlinear, open-loop, and feedback controllers. By estimating the physical derivative function, we showed that our method is able to push a controlled trajectory towards a target intermediate state. We investigated the real-world challenges of a sensitive task such as estimating physical derivatives on a real robot and proposed solutions to make our algorithm robust to the inherent imperfection and noise in physical systems. We focused mainly on the low-level issues of physical derivatives and on showing the feasibility of estimating them robustly. We expect that physical derivatives will contribute to research areas such as safety, control with constraint satisfaction, trajectory planning, and robust or safe control.
A PHYSICAL PLATFORM
In this section, we introduce the physical robot on which we tested our method. The robot is called the finger platform, or simply the finger, throughout this paper. It consists of three articulated arms with three degrees of freedom in total (see fig. 9d). The motors {m1, m2, m3} are depicted in the figure; this naming remains consistent throughout the paper. The ranges of movement of the motors are [0, π], [0, π], and [0, 2π] respectively, and the axes of the plots throughout the paper are in radians. Each arm is moved by a separate brushless DC motor and has one degree of freedom to swing in its own plane (see fig. 9a). Each arm is equipped with an encoder that measures its angle (see fig. 9b). The brushless motors are controlled by an electronic driver that receives the torque values for each motor from a computer terminal via a CAN bus and applies them to the motors (see fig. 9c). Due to the imperfections of the arms, motors, and drivers, we did not use any model of the system, including the inertia matrix of the robot or the current-torque characteristic of the motors. The low-cost and safe nature of this robot makes it a suitable platform to test the idea of physical derivatives, which requires applying many different controllers in the training phase.
B ADDITIONAL PLOTS ILLUSTRATING REAL WORLD CHALLENGES (SECTION 3)
C APPLICATIONS OF PHYSICAL DERIVATIVES
If we know how the states of a trajectory change as a result of a change in the policy parameters, the policy can easily be updated to push the trajectory towards a desired one. For example, assume we are interested in going from the current trajectory T(θ) to a target trajectory T*. The distance between these trajectories can be minimized by perturbing the policy parameters in the direction −∂‖T(θ) − T*‖/∂θ. This direction is readily available since we have estimated ∂T(θ)/∂θ as a physical derivative. As an exemplary case, we show this application of our method in practice in section 4. Other applications of physical derivatives are in robust control and safety. In both cases, the physical derivative allows us to predict the behaviour of the system if the policy changes in a neighbourhood around a nominal policy. Then it is possible to make sure that some performance or safety criteria will not be violated by a local perturbation of the policy. As a concrete example, an autonomous driving system could have a calibration phase during which the physical derivatives of the car are estimated by perturbing the controller parameters around different nominal policies that are likely to occur on real roads. The calibration must be done in a safe condition before deploying the system. When deployed, the estimated physical derivatives can be used to predict the effect of a change of the policy on the behaviour of the system and to neutralize the change if it would move the car towards unsafe regions of its state space. The command that changes the policy can be issued by a high-level controller (e.g., a guidance system), and safety is confirmed by a low-level mechanism through physical derivatives. This work focuses on the concept and introduction of physical derivatives; direct applications go significantly beyond its scope. In the following, we give a more detailed description of the use of physical derivatives in robust and safe control.
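A minimal sketch of this trajectory-matching update, assuming the per-step Jacobians J_t = ∂x_t/∂θ have already been estimated as physical derivatives (the function names are ours):

```python
import numpy as np

def trajectory_update_direction(jacobians, traj, traj_star):
    """Descent direction on L(theta) = sum_t ||x_t - x*_t||^2 by the chain
    rule: dL/dtheta = sum_t 2 * J_t^T (x_t - x*_t), with J_t of shape (d, m)
    the physical derivative dx_t/dtheta at step t."""
    grad = np.zeros(jacobians[0].shape[1])
    for J_t, x_t, xs_t in zip(jacobians, traj, traj_star):
        grad += 2.0 * J_t.T @ (x_t - xs_t)
    return -grad

# One update step on the policy: theta = theta + alpha * direction
```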
Robust control. In control theory, robust control relates to the design of a controller whose performance is guaranteed for a range of systems and controllers belonging to a certain neighborhood around the nominal system (Zhou & Doyle, 1998). It is desired to have a controller that keeps the performance of the system at a certain good level even if the parameters of the controller are not fixed to the theoretical values. Assume the performance of the system is associated with some function of a trajectory E(T). Changing the parameters of the controller θ results in a change in the trajectories. This allows us to compute ∂T/∂θ, which consequently gives us ∂E(T)/∂θ by the chain rule. Roughly speaking, between two sets of parameters θ1 and θ2, the set of parameters that gives the smaller ∂E/∂θ is preferred. This means that, by shaking the parameters of the controller and assessing the performance of the system, an estimate of the curvature of the landscape of E(T(θ)) is obtained. We prefer flatter regions of this space, where a small change in θ does not cause a drastic change in the performance metric E.
Safety. Safety refers to situations in which the agent may hurt itself or the environment and cause irreversible damage if it freely takes arbitrary actions (Garcıa & Fernández, 2015). For a safety-critical system whose full physical model is hard to obtain, physical gradients can assist in restricting the parameters of the robot to avoid unsafe behavior. The physical derivatives are learned in the lab environment before the robot is deployed into the wild. For example, a rover whose mission is to safely explore an unknown environment often relies on a learning loop that allows it to adapt to the new environment. Even though learning in the new environment requires sufficient exploration, the physical derivatives can be used to give a rough simulation of the robot's next few states under a given update to its parameters. Potentially harmful updates can be detected by such a simulation and avoided.
D EXTENDED SET OF SOLUTIONS TO THE REAL WORLD CHALLENGES
D.1 DETECTING ZERO CROSSING
In this method, we take advantage of special landmarks in the trajectories. The landmarks are typically caused by physical constraints of the system. For example, when a robot's leg touches the ground, the velocity of the leg becomes zero. Likewise, when a joint reaches its physical limit, the velocity of the arm connected to the joint becomes zero or changes sign. In both cases, a zero crossing occurs that can be used as a landmark to synchronize lagged trajectories with a reference trajectory. Even though this method eliminates the temporal noise, it requires the presence of such landmarks along the trajectories. Notice that, from a mathematical point of view, there is nothing special about zero: we could pick any value of the states along a reference trajectory and synchronize all other trajectories with respect to it. In practice, however, physical landmarks are easier to detect and have less ambiguity, which gives a more accurate synchronization.
E EXPERIMENTAL DETAILS
The starting position in all experiments is (π/2, π/2, π). The overall details of the tasks are as follows:
Task                 Number of trajectories   Timesteps
Linear (N)           640                      1500
PD controller (N)    640                      1500
PD controller (U)    1000                     1500
Sine 1 joint (N)     640                      5000
Sine 1 joint (U)     1000                     5000
Sine 2 joints (U)    640                      5000
Sine 2 joints (N)    1000                     5000

Here (N) denotes Gaussian (normal) shaking and (U) denotes uniform shaking.
In the Gaussian (normal) sampling cases, we ran 10 runs for each setting of the λ parameters, which control the noise level.
E.1 LINEAR
u_{it} = w_i t + b_i for i = 1, 2, 3 (22)
E.1.1 GAUSSIAN SAMPLING
w_i = W_i + ε_{w,i} for i = 1, 2, 3
ε_{w,i} ∼ N(0, e_w · ‖W_i‖_2), e_w ∼ exp(λ_w) for λ_w = 1, 5, 10, 50, 100, 500, 1000, 5000
b_i = B_i + ε_{b,i} for i = 1, 2, 3
ε_{b,i} ∼ N(0, e_b · ‖B_i‖_2), e_b ∼ exp(λ_b) for λ_b = 1, 5, 10, 50, 100, 500, 1000, 5000
W = [0.00001, 0.0001,−0.00001], B = [−0.28,−0.15,−0.08]
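A sketch of one draw of this scheme; we read N(0, v) as a Gaussian with variance v and exp(λ) as an exponential with rate λ, both of which are our reading of the notation above:

```python
import numpy as np

def sample_linear_params(W, B, lam_w, lam_b, rng):
    """One Gaussian-shaking draw of the linear-controller parameters."""
    e_w = rng.exponential(scale=1.0 / lam_w)   # numpy uses scale = 1/rate
    e_b = rng.exponential(scale=1.0 / lam_b)
    W, B = np.asarray(W), np.asarray(B)
    w = W + rng.normal(0.0, np.sqrt(e_w * np.abs(W)), size=W.shape)
    b = B + rng.normal(0.0, np.sqrt(e_b * np.abs(B)), size=B.shape)
    return w, b

rng = np.random.default_rng(0)
w, b = sample_linear_params([1e-5, 1e-4, -1e-5], [-0.28, -0.15, -0.08],
                            lam_w=10, lam_b=10, rng=rng)
```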
E.2 PD CONTROLLER
The final destination is (π/10, 3π/4, 7π/12).
E.2.1 GAUSSIAN SAMPLING
k_p = K_P + ε_{k_p}
ε_{k_p} ∼ N(0, e_{k_p} · ‖K_P‖), e_{k_p} ∼ exp(λ_{k_p}) for λ_{k_p} = 1, 5, 10, 50, 100, 500, 1000, 5000
k_d = K_D + ε_{k_d}
ε_{k_d} ∼ N(0, e_{k_d} · ‖K_D‖), e_{k_d} ∼ exp(λ_{k_d}) for λ_{k_d} = 1, 5, 10, 50, 100, 500, 1000, 5000
E.2.2 UNIFORM SAMPLING
kp ∼ U(−0.5, 1.5),KP = 1
kd = KD = 0.01
E.3 SINE 1 JOINT
E.3.1 GAUSSIAN SAMPLING
w = W + ε_w
ε_w ∼ N(0, e_w · ‖W‖), e_w ∼ exp(λ_w) for λ_w = 1, 5, 10, 50, 100, 500, 1000, 5000
a = A + ε_a
ε_a ∼ N(0, e_a · ‖A‖), e_a ∼ exp(λ_a) for λ_a = 1, 5, 10, 50, 100, 500, 1000, 5000
W = 0.01, A = 0.5
E.3.2 UNIFORM SAMPLING
w ∼ U(0.005, 0.015), a = A = 0.5
E.3.3 SINE 2 JOINTS
E.3.4 GAUSSIAN SAMPLING
w_i = W_i + ε_{w,i} for i = 1, 2
ε_{w,i} ∼ N(0, e_w · ‖W‖_2), e_w ∼ exp(λ_w) for λ_w = 1, 5, 10, 50, 100, 500, 1000, 5000
a_i = A_i + ε_{a,i} for i = 1, 2
ε_{a,i} ∼ N(0, e_a · ‖A‖_2), e_a ∼ exp(λ_a) for λ_a = 1, 5, 10, 50, 100, 500, 1000, 5000
W = [0.01, 0.01], A = [−0.4, 0.5]
E.3.5 UNIFORM SAMPLING
wi ∼ U(0.005, 0.015) for i = 1, 2, a = A = 0.5
E.4 GP SCORE:
Definition of the GP score: the score is defined as 1 − u/v, where u is the residual sum of squares Σ(y_true − y_pred)² and v is the total sum of squares Σ(y_true − mean(y_true))². The best possible score is 1.0.
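This is the coefficient of determination R², as reported by scikit-learn's score method; a minimal sketch of the same formula:

```python
import numpy as np

def gp_score(y_true, y_pred):
    """R^2 = 1 - u/v; the best possible value is 1.0 (a perfect fit)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    u = np.sum((y_true - y_pred) ** 2)             # residual sum of squares
    v = np.sum((y_true - y_true.mean()) ** 2)      # total sum of squares
    return 1.0 - u / v
```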
E.5 ZERO-SHOT PLANNING TASK:
For the task of section 4.4:
Number of training trajectories: 100, each with 1500 time steps
K_d = 0.01
K_p: uniformly sampled from [0.2, 0.6]
Initial point: X◦ = [π/2, π/2, π]
Desired position: [π/10, 3π/4, 7π/12]
F DETAILED LITERATURE REVIEW
There has been a recent surge of interest in unsupervised methods in reinforcement learning, where a task-specific reward function is not the only driving force to train the agent (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016). A truly intelligent agent must behave intelligently in a range of tasks, not only in a single task associated with its reward function. This requires the agent to develop some sort of general competence that allows it to come up with solutions to new problems by combining low-level primitive skills. This general competence is a key factor that allows animals to quickly and efficiently adapt to a new problem (Weng et al., 2001). If the traditional framework is called extrinsically motivated RL, the new framework is called intrinsically motivated RL. There have been many ideas in this line, with various definitions for the terms motivation and intrinsic. Some researchers assume a developmental period in which the agent acquires some reusable modular skills that can be easily combined to tackle more sophisticated tasks (Kaplan & Oudeyer, 2003; Weng et al., 2001). Curiosity and confidence are other unsupervised factors that can drive the agent towards unexplored spaces to achieve new skills (Schmidhuber, 1991b; Kompella et al., 2017). Interestingly, there are observations in neuroscience that dopamine, a substance known to control one's motivation for extrinsic rewards, is also associated with intrinsic properties of the agent such as novelty and curiosity: a novel sensory stimulus activates the dopamine cells the same way they are activated by extrinsic reward. Children build a collection of skills cumulatively while they engage in activities without a specific goal, e.g., hitting a ball repeatedly without a long-term target such as scoring a goal. The achieved skills contribute to their stability while handling objects (Touwen et al., 1992).
Another line of work concerns the fundamental constraints of the agent/environment and ensures those constraints are met while learning. In many practical systems, learning episodes must halt if the system is likely to undergo an irreversible change; for example, the training episodes of a fragile robot must ensure the robot does not fall and will not be broken in any circumstance while acting under a certain policy. The general name safe RL embodies ideas to tackle such issues in current interactive learning algorithms (Garcıa & Fernández, 2015). One major aspect of safety is stability, which loosely means that the states of the system converge to some invariant sets or remain within a certain bound (Lyapunov, 1992). Control theory uses a physical model of the system to guarantee stability (Khalil, 2002). When the physical model is not known in advance, the model is either learned along with the policy (model-based RL) or implicitly distilled into the value function (model-free RL) (Sutton & Barto, 2018). Stability can be categorized as an intrinsic motivation for the agent: no matter what task the agent aims to solve, it must remain stable all the time. Learning the transition model, which is the major concern of model-based RL, can also be seen as intrinsic motivation. The agent learns to predict the future state given the current state. The advantage of learning a model, even an inaccurate one, is twofold: the agent knows where to go and where not to go. It knows which regions of the state space are unsafe to explore and must be avoided, and which regions are unexplored and might be informative for improving the model. This brings us to another view of intrinsic reward, one that encourages diversity.
Our work is also relevant to sensitivity analysis and its use in training the parameters of dynamical models. After Chen et al.'s NeuralODE on training neural networks by sensitivity analysis of the network parameters, the method was successfully applied to various tasks such as learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as a model-free sensitivity analysis on real-world systems. In NeuralODE, computing the gradient with respect to the parameters requires solving ODEs for both states and adjoint states, which requires a transition model. Since we are working directly on the physical system, we do not need to calculate the integrals forward in time: the system itself acts as a physical ODE solver.
The importance of learning from unlabelled experience is a known fact in animals. Many animals function efficiently soon after birth, before being exposed to massive labeled experience. Part of this might be due to unsupervised learning, but a major part of the story can be a genetic heritage accumulated over years of evolution, which Zador (2019) calls the genomic bottleneck. The same idea turned out to be valid in statistical learning, where an automatically discovered neural network architecture performs surprisingly well with shared random weights (Gaier & Ha, 2019). The inductive bias embedded in neural network architectures could be analogous to the wiring of the brains of animal babies, which transfers from generation to generation through genes.
F.1 PROOFS
Proof of the lemma on the voxelization error.
Proof. The voxels become boxes in 3D, as in fig. 15. The gradient is estimated from the distance between two points in 3D coordinates. Hence, the source of the voxelization error is approximating the distance between two points with the distance between the centers of the corresponding boxes to which those points belong. This error is written next to the boxes in fig. 15. The maximum error is inversely proportional to the distance between the voxels, meaning that voxels located far from each other induce less voxelization error. This is intuitively clear: when two points are distant from each other, a slight change in their positions does not change the distance between them considerably. The upper bound on the error occurs within a single voxel, where the error is bounded by the size of the voxel.
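A quick numeric illustration of this behavior, reusing the cubic-grid voxelization assumed in the earlier sketch (not from the paper's codebase):

```python
import numpy as np

def snap(x, gamma):
    # Nearest voxel center on a cubic grid with spacing 2*gamma.
    return 2.0 * gamma * np.round(np.asarray(x) / (2.0 * gamma))

def voxel_distance_error(p, q, gamma):
    """|true distance - distance between the snapped voxel centers|."""
    true_d = np.linalg.norm(np.asarray(p) - np.asarray(q))
    vox_d = np.linalg.norm(snap(p, gamma) - snap(q, gamma))
    return abs(true_d - vox_d)

rng = np.random.default_rng(0)
for dist in (0.1, 1.0, 10.0):
    p = rng.uniform(-1.0, 1.0, size=3)
    q = p + np.array([dist, 0.0, 0.0])
    err = voxel_distance_error(p, q, gamma=0.05)
    # The absolute error stays on the order of the voxel size, so the
    # relative error shrinks as the pair of points gets farther apart.
    print(f"distance {dist}: error {err:.4f}, relative {err / dist:.4f}")
```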
G MORE RESULTS
In this section, we present the results of the extra experiments that were omitted from the main text due to space limits.
The following figures show GP models trained by a set of directional derivatives collected during the shaking phase. The results are provided for the experiments of sections 4.2 and 4.3.
[Figure panels: GP derivative models evaluated at selected time steps T along the trajectories, for the experiments of sections 4.2 and 4.3.]

1. What is the main contribution of the paper in terms of control and policy learning?
2. What are the strengths and weaknesses of the proposed method compared to current RL methods?
3. Do you have any concerns regarding the tasks and policies considered in the paper?
4. How does the reviewer assess the novelty and applicability of the proposed approach in real-world problems?
5. What are some suggestions for improving the paper, such as comparing the proposed method with existing methods or showing its effectiveness in more challenging environments?

Review
This paper presents a method for control by estimating the gradient of trajectories w.r.t. the policy parameters by fitting a GP to a set of noisy trajectories executing the same controller. This is opposed to the majority of current RL methods that either learn a forward model or learn a policy. They argue that learning this gradient is a middle step between model-based and model-free RL. The method is shown to estimate gradients on simple policies (linear and nonlinear open-loop controllers, and a linear PD controller) for a free-space reaching robot, and update a controller to add a trajectory constraint to pass an intermediate state.
The paper does show that they can learn these derivatives on controllers from data, which is a cool proof of concept. The method to estimate gradients by “shaking” in a probabilistic way by fitting a GP to noisy trajectories is clever and interesting. But there are a few reasons why I believe this work is not ready for publication.
The paper only considers free-space reaching as a task, which is not a difficult problem as it does not have contacts. The policies considered are also very simple: an affine open-loop controller (U = Wt + B with 6 parameters), a simple nonlinear open-loop controller (U = Asin(wt) with 2 parameters) and a PD controller with 2 parameters. The motivation is not too convincing without showing some results on hard tasks: model-based RL methods work great in this setting, and are very likely to outperform the method proposed in the paper. The motivation for the proposed method avoids explicit model learning which is a similar motivation as model-free methods, so the paper should at least show that it works as a proof of concept in settings where model-free learning has some advantages, eg. environments with contacts. The paper should probably also compare to existing methods in those settings, although I understand that it might not outperform existing methods.
The results of section 4.4, which uses the learned model to plan, really show that using the learned model to update the policy is probably not straightforward. The parameters of the PD controller that goes from x_0 to x* are updated to pass a waypoint x*_t using the learned model. But in practice what this is basically doing is changing k_p to introduce a large, possibly inefficient deviation in the path from x_0 to x* that hits x*_t at time t. Directly planning a path from x_0 to x*_t and then from x*_t to x* would probably give a much cleaner path.
At a high level, the proposed method is likely to be difficult to apply to real problems because estimating the gradient of T w.r.t. the policy parameters is probably just much noisier than estimating the forward model directly, which is already a significant challenge. Perhaps one useful experiment is to somehow explicitly show how these two methods compare (e.g. measure the variance of trajectory predictions of this method vs. rolling out a learned forward model repeatedly).
Comments:
Equations 11 and 12 do not make sense / do not use standard notation. I suggest defining n (as a signal or a value; it is not clear at the moment) and defining a new output signal y_t instead of the x_t <- ... notation. In particular, the way equation 11 is written seems to say the output x_t is a value-shifted version of the input x_t, NOT a time-shifted one.
The preliminaries section 1.1 does not discuss environment dynamics. This is significant because the paper seems to assume deterministic dynamics but this is never explicitly stated.
Voxelization as a solution to spatial noise is a bit surprising because discretizing the space throws away local gradient information, which seems valuable to the method. It would be good to understand the effect of this design decision better with an ablation.
Minor comments:
Page 9:
- constrain -> constraint
- Assuem -> Assume
- such controller -> such a controller |
Temporal noise. The temporal noise represented by n affects trajectories by shifting them in time xt ← xt + n for t = 0, 1, . . . , T. (11)
Notice that the absence of subscript t in n shows that this noise is not time-dependent, i.e., the time shift does not change along the trajectory as time proceeds.
Spatial noise. The trajectories affected by spatial noise cannot be aligned with each other by shifting forward or backward in time. We can model this noise as a state-dependent influence on the state of the system at every time step.
xt ← xt + nxt (12)
The following definition makes the distinction more concrete. Definition 1. Consider two trajectories T (1)(t) and T (2)(t) as two temporal signals. Assume St◦ is the shift-in-time operator defined as
St◦T (t) = T (t+ t◦) (13) for an arbitrary function of time T (t). We say T (2)(t) is temporally noisy version of T (1)(t) if
∃t◦ ∈ R s.t. ‖T (2) − St◦T (1)‖1 ≤ (14) where is a hyper-parameter threshold that reflects our prior confidence about the accuracy of the motors, joints, physical and electrical elements (in general construction process) of the robot. On the other hand, T (2) is called a spatially noisy version of T (1) if
@t◦ ∈ R s.t. ‖T (2) − St◦T (1)‖1 ≤ (15)
3.0.1 SOLUTION TO TEMPORAL NOISE
Fortunately, this type of noise is not state-dependent by definition. If we find out how much a trajectory is shifted in time with respect to another trajectory, we can simply shift the trajectory for those many time steps and compensate for the delay. Hence, the problem becomes detecting the lagged trajectories with respect to a reference trajectory and also estimate the amount of the required time shift to compensate for the delay. We can either use physical landmarks in the trajectories to align them or use the correlation between them as a measure of alignment. The later gave better results, hence, we postpone the description of the former to the appendix D.1.
Correlation-based delay estimation In this method, we use the correlation between zero-meaned trajectories T (i) and T (j) to check if one is the lagged version of the other one. The delay τ is found by
τ∗ = argmax τ T−τ∑ t=0 〈Sτx(i)t ,x(j)t 〉 (16)
where Sτ is a shift-operator by τ ∈ Z time steps. In practice, we take one trajectory of {T (1), T (2), . . . , T (M)}, e.g. T (r) as the reference and synchronize other trajectories with respect to it using eq. (16). The trajectories must be initially normalized to avoid trivial solutions where every trajectory is pushed towards the larger parts of the reference trajectory. For illustrative purposes, the plots of fig. 14 show a sample of the lagged trajectory from the finger platform and its correction by the above method.
3.1 SOLUTION TO SPATIAL NOISE
The spatial noise can be a stochastic function of the actuator, environmental change, and electronic drivers. In a perfect model of the transition dynamics xt+1 = f(xt,ut), applying the same control sequence {u0,u1, . . . ,uT−1} always results in the same sequence of states {x1,x2, . . . ,xT } when it starts from the same initial state x0. This assumption is often violated in physical systems as different runs of the same policy may result in different trajectories as can be seen in fig. 10 in the Appendix. The noise in the dynamics can be any function of states, input, and time. Therefore, it is difficult to model this noise since it requires a prohibitively large number of random experiments. The good news is that if the physical system is built properly, the effect of this noise is expectedly low. Based on our observations from the finger platform, we can assume the following.
Assumption 2. Limit on the physical noise: Let’s the control sequence U = {u0,u1, . . . ,uT−1} be applied to the system M times resulting in multiple sequence of states T (1), T (2), . . . , T (M). There exists a relatively small ζ such that
‖T (i) − T (j)‖∞ ≤ ζ for every i, j ∈ {1, 2, . . . ,m}. (17)
The word relatively here means that the change of the trajectory due to the inherent physical noise of the system must be small compared to the change of the trajectories when the parameters of the policy are perturbed.
To reduce the sensitivity of the estimated gradient to this unwanted spatial noise, we divide the state space of the physical system into regularly located adjacent cells called voxels. Each voxel vox(c) is represented by its center c and is defined as
vox(c) = {x ∈ X | ‖x− c‖∞ ≤ γ} (18) where γ is the parameter of the voxelization. The concept of the voxel is roughly used as a superstate. Every state that ends up within vox(c) gives rise to the same superstate. After recording the trajectories form the robot, every state is mapped to the center of the voxel it belongs to as
c← x for x ∈ vox(c) (19) After voxelization, we work with c instead of x. For example, all the gradients of (7) are computed as ∇θc rather than ∇θx. To illustrate the positive effect of voxelization of the state space, it can be seen in fig. 3 that increasing the voxel size improves the overlapping between two trajectories that deviate from each other due to the inherent spatial noise of the system not because of perturbing the parameters of the policy, but because of the inherent imperfection of the mechanical and electrical components of the system. This benefit comes with a cost which is the error introduced by voxelization. Fortunately, this error is bounded due to the following lemma Lemma 3. The error caused by voxelization is bounded and inversely proportional to the size of each voxel (see appendix F.1 for a brief proof).
After dealing with the challenge of inherent noise, we pursue the main goal of this paper which is estimating ∂T /∂θ directly from the physical system. In the following, we investigate the use of the different type of controllers to emphasize the extent of applicability of the proposed method.
4 EXPERIMENTS
In this section, we show how physical derivatives can be estimated in practice through several experiments. Notice that our work is different from computing gradients around the working point of a system by finite-difference. We aim to collect samples from such gradients by perturbing a grid of nominal values of the policy parameters and then generalize to unseen perturbations by Gaussian process as a probabilistic regression method. The experiments are designed to show each challenge separately and the efficacy of our proposed solution to it. Due to space constraints, details to the physical platform can be found in section A in the Appendix. See1 for videos of the robot while collecting data for different experiments and more backup materials.
1https://sites.google.com/view/physicalderivatives/
m 3
T = 10 T = 40 T = 80 T = 200
T = 200 T = 400 T = 600 T = 800
4.1 LINEAR OPEN-LOOP CONTROLLER
As a simple yet general policy, in this section, we consider an open-loop controller which is a linear function of time. The policy ut = [u1t, u2t, u3t] constitutes the applied torques to the three motors {m1,m2,m3} of the system and is assigned as
uit = wit+ bi for i = 1, 2, 3 (20)
Notice that the torque consists of two terms. The first term wit grows with time and the second term remains constant. The controller has 6 parameters in total denoted by θ. The task is to predict∇θxt for every t along the trajectory. In the training phase, the training data is obtained via shaking as described in section 2.
Fig. 7 shows examples of nominal trajectories together with trajectories produced by the perturbed controller, and the computed derivatives. The arrows are plotted as originating from the perturbed trajectories only for easier distinction. Each arrow corresponds to the change of the states at a certain time step on the source trajectory as a result of perturbing the policy. Each figure corresponds to a pair of nominal values of {w, b} for the linear open-loop controller. See fig. 29 for further examples.
4.2 NONLINEAR OPEN-LOOP CONTROLLER
Physical derivatives can naturally be computed for either linear or nonlinear controllers, which makes our approach different from taking the gradient of models through time. In model-based methods, if the model's transition dynamics is not differentiable, taking the derivative is theoretically challenging. Our method, however, takes advantage of the real physics of the system to compute the gradients regardless of whether an approximating model would be differentiable or not. To elaborate, we test our method on a simple but nonlinear policy, i.e., u_t = A sin(ωt). The sinusoidal torque is applied to either one or two motors of the system to investigate the performance of our method. We tested Gaussian and uniform shaking for θ = {A, ω} as the parameters of this controller. The GP interpolation for the partial derivatives at some time instances along the trajectory can be seen in fig. 6 and more extensively in figs. 16 to 18 in the Appendix. One might be interested in the direction of the predicted derivative instead of its exact magnitude. To this end, we take several test perturbations for every time step and use cos(α) as a measure of alignment between the predicted and ground-truth derivative vectors. The time evolution of the histogram of this measure along the trajectory shows better alignment as time proceeds; this effect can be seen in figs. 27 and 28. It confirms our observation of an initial transient noise in the system that dies out gradually as time progresses. The overall performance of our method in predicting physical derivatives in unseen directions for two different shaking methods is shown in appendix E.
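To make the GP step and the cos(α) measure concrete, the following sketch fits one Gaussian process per time step, mapping a parameter perturbation Δθ to the induced state change, and evaluates the alignment with a held-out ground-truth derivative. It uses scikit-learn's GaussianProcessRegressor; the kernel choice is ours.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_physical_derivative(dthetas, dtrajs, t):
    """GP from perturbation dtheta (N, p) to state change at time step t."""
    X, Y = dthetas, dtrajs[:, t, :]
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    return gp.fit(X, Y)

def alignment(gp, dtheta_test, dtraj_true):
    """cos(alpha) between predicted and ground-truth directional derivatives."""
    pred = gp.predict(dtheta_test[None, :])[0]
    denom = np.linalg.norm(pred) * np.linalg.norm(dtraj_true) + 1e-12
    return float(pred @ dtraj_true) / denom
```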
4.3 FEEDBACK CONTROLLER
Often in practice, the policy incorporates some function of the states of the system. Some well-known examples which have been used extensively in control applications are P, PD, PI and PID controllers. Here, we consider two members of this family, i.e., P and PD controllers. The policy becomes u = Kpe for P controllers and u = Kpe + Kdė for PD controllers. The error e is the difference between the current state x and the desired state x∗. The parameters of the controller {Kp, Kd} are scalar values that are multiplied element-wise with the error vector. This implies that the controller parameters are shared across the three motors, leaving the controller of the whole platform with two parameters that weight the value and the rate of the error. We applied uniform and Gaussian shaking for the set of parameters θ = {Kp, Kd} with different scenarios. The GP interpolation for the physical derivatives at some time instances along the trajectory can be seen in fig. 6 and more extensively in figs. 19 to 24 in the Appendix. The time evolution of the histogram of misalignment between predicted and ground-truth directional derivatives (see figs. 25 and 28 in the appendix) once again confirms the existence of the initial transient noise, as also observed in section 4.2. Similar to the sinusoidal experiment, the overall performance of our method is presented in appendix E.
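For reference, a minimal rollout with the PD policy above might look as follows; a toy double-integrator stands in for the physical finger, so this is a sketch of the control law rather than runnable robot code.

```python
import numpy as np

def pd_rollout(kp, kd, x0, x_star, T=1500, dt=0.01):
    """Roll out u = kp*e + kd*de/dt with e = x* - x, so de/dt = -dx/dt here."""
    x = np.array(x0, dtype=float)
    xdot = np.zeros_like(x)
    traj = []
    for _ in range(T):
        u = kp * (x_star - x) - kd * xdot   # scalar gains shared by all motors
        xdot = xdot + dt * u                # stand-in dynamics: unit-mass joints
        x = x + dt * xdot
        traj.append(x.copy())
    return np.stack(traj)

traj = pd_rollout(kp=1.0, kd=0.01,
                  x0=[np.pi / 2, np.pi / 2, np.pi],
                  x_star=np.array([np.pi / 10, 3 * np.pi / 4, 7 * np.pi / 12]))
```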
4.4 ZERO-SHOT PLANNING TASK
Our previous experiments in sections 4.1, 4.2 and 4.3 showed that learning the physical derivative map is feasible for various types of controllers. In this section, we demonstrate an example of a constraint-satisfaction task by means of the physical derivative map. In this experiment, the superscript (s) corresponds to the nominal trajectory, which is called the source. Assume the system is controlled by a PD controller to reach a target state x*, i.e., the control torques are designed as u = k_p^{(s)}(x − x*) + k_d^{(s)} ẋ. The controller does a decent job of reaching the target state given reasonable values for k_p and k_d. However, such a controller does not give us a clear way to shape the trajectory that starts from x◦ and ends at x*. Assume it is desired that the nominal controlled trajectory T^{(s)} passes through an intermediate state x*_t at time t on its way towards the target state x* (we could equally assume that the system must avoid some regions of the state space for safety reasons). The solution with physical derivatives is as follows. Assume k_d^{(s)} is fixed and only k_p^{(s)} is changeable. If the physical derivative map is available, we have access to ĝ_t(k_p^* − k_p^{(s)}) = (x*_t − x_t^{(s)})/(k_p^* − k_p^{(s)}). By simple algebraic rearrangement, we have
k_p^* = (x*_t − x_t^{(s)}) / ĝ_t(k_p^* − k_p^{(s)}) + k_p^{(s)}. (21)
The new parameter of the policy is supposed to push the source trajectory T (s) towards a target trajectory T ∗ that passes through the desired state x∗t at time t. The result of this experiment on our physical finger platform can be seen in fig. 8.
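A sketch of this update, with a fitted GP standing in for the physical-derivative map ĝ_t: since x is three-dimensional while k_p is scalar, eq. (21) gives one estimate per coordinate, and averaging them is one of several reasonable choices, made here purely for illustration.

```python
import numpy as np

def retarget_kp(gp_t, kp_s, x_s_t, x_star_t, probe=1e-2):
    """Solve eq. (21) for the new gain k_p^* given the GP at time t."""
    d_state = gp_t.predict(np.array([[probe]]))[0]   # predicted state change
    g_hat = d_state / probe                          # slope estimate g_t
    # eq. (21), applied per coordinate and then averaged
    delta = (np.asarray(x_star_t) - np.asarray(x_s_t)) / g_hat
    return float(np.mean(delta) + kp_s)
```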
4.5 RELATED WORKS
A truly intelligent agent must develop some sort of general competence that allows it to combine primitive skills to master a range of tasks, not only a single task associated with a specified reward function. The major part of such competence comes from unsupervised experiences. Animals use a similar competence to quickly adapt to new environments (Weng et al., 2001) and function efficiently soon after birth, before being exposed to massive supervised experience (Zador, 2019). Due to its generality, such basic skills can be inherited over generations rather than being learned from scratch (Gaier & Ha, 2019). Unlike traditional RL, where learning is driven by an extrinsic reward signal, intrinsically motivated RL concerns task-agnostic learning. Similar to animal babies (Touwen et al., 1992), the agent may undergo a developmental period in which it acquires reusable modular skills (Kaplan & Oudeyer, 2003; Weng et al., 2001) such as curiosity and confidence (Schmidhuber, 1991a; Kompella et al., 2017). Another aspect of such general competence is the ability of the agent to remain safe during its learning and deployment period (Garcıa & Fernández, 2015). In physical systems, especially continuous control, stability is a major aspect of safety, implying that states of the system converge to some invariant sets or remain within a certain bound (Lyapunov, 1992). Control theory often assumes the model of the system is known in order to guarantee stability (Khalil, 2002). In the absence of the model, model-based RL learns the model along with the policy. Hence, learning the transition model to predict future states can be another intrinsic reward.
From a technical point of view, our work is relevant to sensitivity analysis and how it is used to train the parameters of models, as in Chen et al.'s NeuralODE. The method has proven effective in many tasks, including learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as a model-free sensitivity analysis in real-world systems. In NeuralODE, the gradient with respect to the parameters requires solving ODEs for both states and adjoint states, which requires a transition model. Since we are working directly on the physical system, we do not need to calculate the integrals forward in time; the system itself acts as a physical ODE solver. We refer to appendix F for a more detailed review of related work.
5 CONCLUSION
In this paper, we present a method to learn how the trajectories of a physical real-world dynamical system change with respect to a change in the policy parameters. We tested our method on a custom-built platform called the finger robot, which allows testing a range of controllers with various settings, to show the applicability of our method to linear, nonlinear, open-loop, and feedback controllers. By estimating the physical derivative function, we showed that our method is able to push a controlled trajectory towards a target intermediate state. We investigated the real-world challenges of performing a sensitive task such as estimating physical derivatives on a real robot, and proposed solutions to make our algorithm robust to the inherent imperfection and noise of physical systems. We focused mainly on the low-level issues of physical derivatives and on showing the feasibility of estimating them robustly. We expect that physical derivatives will contribute to research areas such as safety, control with constraint satisfaction, trajectory planning, and robust control.
A PHYSICAL PLATFORM
In this section, we introduce the physical robot on which we tested our method. The robot is called the finger platform, or simply the finger, throughout this paper. The ranges of movement for the motors are [0, π], [0, π], [0, 2π] respectively, and the axes of the plots throughout the paper are in radians. The robot consists of three articulated arms with three degrees of freedom in total (see fig. 9d). The motors {m1, m2, m3} are depicted in the figure; this naming remains consistent throughout the paper. Each arm is moved by a separate brushless DC motor and has one degree of freedom to swing in its own plane (see fig. 9a). Each arm is equipped with an encoder that measures its angle (see fig. 9b). The brushless motors are controlled by an electronic driver that receives torque values for each motor from a computer terminal via a CAN bus and applies the torques to the motors (see fig. 9c). Due to the imperfections of the arms, motors, and drivers, we did not use any model of the system, including the inertia matrix of the robot or the current-torque characteristic of the motors. The low-cost and safe nature of this robot makes it a suitable platform to test the idea of physical derivatives, which requires applying many different controllers in the training phase.
B ADDITIONAL PLOTS ILLUSTRATING REAL WORLD CHALLENGES (SECTION 3)
C APPLICATIONS OF PHYSICAL DERIVATIVES
If we know how the states of a trajectory change as a result of a change in the policy parameters, the policy can easily be updated to push the trajectory towards a desired one. For example, assume we are interested in going from the current trajectory T (θ) to the target trajectory T ∗. The distance between these trajectories can be minimized by perturbing the policy parameters in the direction −∂‖T (θ) − T ∗‖/∂θ. This direction is already available since we have estimated ∂T (θ)/∂θ as a physical derivative. As an exemplary case, we show this application of our method in practice in section 4. Other applications of physical derivatives are in robust control and safety. In both cases, the physical derivative allows us to predict the behaviour of the system if the policy changes in a neighbourhood around a nominal policy. Then, it is possible to make sure that some performance or safety criteria will not be violated for a local perturbation of the policy. As a concrete example, for an autonomous driving system, there can be a calibration phase during which physical derivatives of the car are estimated by perturbing the controller parameters around different nominal policies which are likely to occur on real roads. The calibration must be done in a safe condition before deploying the system. When deployed, the estimated physical derivatives can be used to predict the effect of a change of the policy on the behaviour of the system, and to neutralize the change if it would move the car towards unsafe regions of its state space. The command that changes the policy can be issued by a high-level controller (e.g., a guidance system), and safety is confirmed by a low-level mechanism through physical derivatives. This work focuses on the concept and introduction of physical derivatives; direct applications would go significantly beyond its scope. In the following, we give a more detailed description of the use of physical derivatives in robust and safe control.
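A sketch of that update, with `phys_grad` denoting a hypothetical callable that returns the learned estimate of ∂T /∂θ:

```python
import numpy as np

def policy_update(theta, traj, traj_target, phys_grad, lr=0.1):
    """One step of theta <- theta - lr * d(0.5 * ||T(theta) - T*||^2)/dtheta."""
    J = phys_grad(theta)                  # (T, d, p) estimate of dT/dtheta
    residual = traj - traj_target         # (T, d)
    grad = np.einsum('td,tdp->p', residual, J)  # chain rule over all time steps
    return theta - lr * grad
```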
Robust control In control theory, robust control relates to the design of a controller whose performance is guaranteed for a range of systems and controllers belonging to a certain neighborhood around the nominal system (Zhou & Doyle, 1998). It is desired to have a controller that keeps the performance of the system at a certain level even if the parameters of the controller are not fixed to their theoretical values. Assume the performance of the system is associated with some function of a trajectory, E(T ). Changing the parameters of the controller θ results in a change in the trajectories. This allows us to compute ∂T /∂θ, which consequently gives us ∂E(T )/∂θ by the chain rule. Roughly speaking, between two sets of parameters θ1 and θ2, the set of parameters that gives the smaller ∂E/∂θ is preferred. This means that by shaking the parameters of the controller and assessing the performance of the system, an estimate of the curvature of the landscape of E(T (θ)) is obtained. We prefer flatter regions of this space, where a small change in θ does not cause a drastic change in the performance metric E.
Safety Safety refers to situations in which the agent may hurt itself or the environment and cause irreversible damage if it freely takes arbitrary actions (Garcıa & Fernández, 2015). For a safety-critical system whose full physical model is hard to obtain, physical gradients can assist in restricting the parameters of the robot to avoid unsafe behavior. The physical derivatives are learned in the lab environment before the robot is deployed into the wild. For example, a rover whose mission is to safely explore an unknown environment often has a learning loop that allows it to adapt to the new environment. Even though learning in the new environment requires sufficient exploration, the physical derivatives can be used to give a rough simulation of the robot's next few states under a given update to its parameters. Potentially harmful updates can be detected by such simulation and avoided.
D EXTENDED SET OF SOLUTIONS TO THE REAL WORLD CHALLENGES
D.1 DETECTING ZERO CROSSING
In this method, we take advantage of special landmarks in the trajectories. The landmarks are typically caused by physical constraints of the system. For example, when a robot's leg touches the ground, the velocity of the leg becomes zero. Likewise, when a joint reaches its physical limit, the velocity of the arm connected to the joint becomes zero or changes sign. In both cases, a zero crossing occurs that can be used as a landmark to synchronize lagged trajectories with a reference trajectory. Even though this method eliminates the temporal noise, it requires the presence of such landmarks along the trajectories. Notice that from a mathematical point of view, there is nothing special about zero: we can pick any value of the states along a reference trajectory and synchronize all other trajectories with respect to it. However, in practice, physical landmarks are easier to detect and have less ambiguity, which consequently gives a more accurate synchronization.
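A sketch of the landmark-based synchronization: detect the first zero crossing of a chosen velocity channel in each trajectory and shift the trajectories so the landmarks coincide. The channel index and the use of the first crossing are illustrative choices, and np.roll wraps samples around, so in practice one would truncate the wrapped portion.

```python
import numpy as np

def first_zero_crossing(v):
    """Index of the first sign change in a 1-D velocity signal."""
    s = np.sign(v)
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return int(idx[0]) if len(idx) else 0

def synchronize(trajs, vels, channel=0):
    """Shift each trajectory so its landmark aligns with the reference one."""
    marks = [first_zero_crossing(v[:, channel]) for v in vels]
    ref = marks[0]
    return [np.roll(t, ref - m, axis=0) for t, m in zip(trajs, marks)]
```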
E EXPERIMENTAL DETAILS
The starting position in all the experiments is (π/2, π/2, π). The overall details of each task are as follows:
Task                 Number of trajectories   Timesteps
Linear (N)                   640                1500
PD controller (N)            640                1500
PD controller (U)           1000                1500
Sine 1 joint (N)             640                5000
Sine 1 joint (U)            1000                5000
Sine 2 joints (U)            640                5000
Sine 2 joints (N)           1000                5000
In the normal (Gaussian) sampling cases, we ran 10 simulations for each set of λ parameters, which indicate the noise level.
E.1 LINEAR
u_{it} = w_i t + b_i for i = 1, 2, 3 (22)
E.1.1 GAUSSIAN SAMPLING
w_i = W_i + ε_{w,i} for i = 1, 2, 3
ε_{w,i} ∼ N(0, e_w × ‖W_i‖_2), e_w ∼ exp(λ_w) for λ_w = 1, 5, 10, 50, 100, 500, 1000, 5000
b_i = B_i + ε_{b,i} for i = 1, 2, 3
ε_{b,i} ∼ N(0, e_b × ‖B_i‖_2), e_b ∼ exp(λ_b) for λ_b = 1, 5, 10, 50, 100, 500, 1000, 5000
W = [0.00001, 0.0001,−0.00001], B = [−0.28,−0.15,−0.08]
E.2 PD CONTROLLER
The final destination is (π/10, 3π/4, 7π/12).
E.2.1 GAUSSIAN SAMPLING
k_p = K_P + ε_{k_p}
ε_{k_p} ∼ N(0, e_{k_p} × ‖K_P‖), e_{k_p} ∼ exp(λ_{k_p}) for λ_{k_p} = 1, 5, 10, 50, 100, 500, 1000, 5000
k_d = K_D + ε_{k_d}
ε_{k_d} ∼ N(0, e_{k_d} × ‖K_D‖), e_{k_d} ∼ exp(λ_{k_d}) for λ_{k_d} = 1, 5, 10, 50, 100, 500, 1000, 5000
E.2.2 UNIFORM SAMPLING
kp ∼ U(−0.5, 1.5),KP = 1
kd = KD = 0.01
E.3 SINE 1 JOINT
E.3.1 GAUSSIAN SAMPLING
w = W + ε_w
ε_w ∼ N(0, e_w × ‖W‖), e_w ∼ exp(λ_w) for λ_w = 1, 5, 10, 50, 100, 500, 1000, 5000
a = A + ε_a
ε_a ∼ N(0, e_a × ‖A‖), e_a ∼ exp(λ_a) for λ_a = 1, 5, 10, 50, 100, 500, 1000, 5000
W = 0.01, A = 0.5
E.3.2 UNIFORM SAMPLING
w ∼ U(0.005, 0.015), a = A = 0.5
E.4 SINE 2 JOINTS
E.4.1 GAUSSIAN SAMPLING
w_i = W_i + ε_{w,i} for i = 1, 2
ε_{w,i} ∼ N(0, e_w × ‖W‖_2), e_w ∼ exp(λ_w) for λ_w = 1, 5, 10, 50, 100, 500, 1000, 5000
a_i = A_i + ε_{a,i} for i = 1, 2
ε_{a,i} ∼ N(0, e_a × ‖A‖_2), e_a ∼ exp(λ_a) for λ_a = 1, 5, 10, 50, 100, 500, 1000, 5000
W = [0.01, 0.01], A = [−0.4, 0.5]
E.4.2 UNIFORM SAMPLING
wi ∼ U(0.005, 0.015) for i = 1, 2, a = A = 0.5
E.5 GP SCORE:
Definition of the GP score: the score is defined as 1 − u/v, where u is the residual sum of squares Σ(y_true − y_pred)² and v is the total sum of squares Σ(y_true − mean(y_true))². The best possible score is 1.0.
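This is the coefficient of determination R², as implemented for instance by scikit-learn's `score` method; written out:

```python
import numpy as np

def gp_score(y_true, y_pred):
    """R^2 = 1 - u/v, with u the residual and v the total sum of squares."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    u = np.sum((y_true - y_pred) ** 2)
    v = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - u / v
```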
E.6 ZERO-SHOT PLANNING TASK:
For the task of section 4.4:
Number of training trajectories: 100, each with 1500 time steps
K_d = 0.01
K_p uniformly sampled from [0.2, 0.6]
Initial point: X◦ = [π/2, π/2, π]
Desired position: [π/10, 3π/4, 7π/12]
F DETAILED LITERATURE REVIEW
There has been a recent surge of interest in unsupervised methods in reinforcement learning, where a task-specific reward function is not the only driving force to train the agent (Baranes & Oudeyer, 2013; Bellemare et al., 2016; Gregor et al., 2016; Hausman et al., 2018; Houthooft et al., 2016). A truly intelligent agent must behave intelligently in a range of tasks, not only in the single task associated with its reward function. This requires the agent to develop some sort of general competence that allows it to come up with solutions to new problems by combining low-level primitive skills. This general competence is a key factor in how animals quickly and efficiently adapt to a new problem (Weng et al., 2001). Calling the traditional setting extrinsically motivated RL, the new framework is called intrinsically motivated RL. There have been many ideas in this line, with various definitions for the terms motivation and intrinsic. Some researchers assume a developmental period in which the agent acquires reusable modular skills that can be easily combined to tackle more sophisticated tasks (Kaplan & Oudeyer, 2003; Weng et al., 2001). Curiosity and confidence are other unsupervised factors that can drive the agent towards unexplored spaces to achieve new skills (Schmidhuber, 1991b; Kompella et al., 2017). Interestingly, there are observations in neuroscience that dopamine, a substance known to control one's motivation for extrinsic rewards, is also associated with intrinsic properties of the agent such as novelty and curiosity: a novel sensory stimulus activates dopamine cells the same way they are activated by extrinsic reward. Children build a collection of skills cumulatively while they engage in activities without a specific goal, e.g., hitting a ball repeatedly without a long-term target such as scoring a goal. The achieved skills contribute to their stability while handling objects (Touwen et al., 1992).
Another line of work concerns the fundamental constraints of the agent/environment and ensures those constraints are met while learning. In many practical systems, learning episodes must halt if the system is likely to undergo an irreversible change. For example, the training episodes of a fragile robot must ensure the robot does not fall or break in any circumstance while acting under a given policy. The general name safe RL embodies ideas to tackle such issues in current interactive learning algorithms (Garcıa & Fernández, 2015). One major aspect of safety is stability, which loosely means that the states of the system converge to some invariant sets or remain within a certain bound (Lyapunov, 1992). Control theory uses a physical model of the system to guarantee stability (Khalil, 2002). When the physical model is not known in advance, the model is either learned along with the policy (model-based RL) or implicitly distilled into the value function (model-free RL) (Sutton & Barto, 2018). Stability can be categorized as an intrinsic motivation for the agent: no matter what task the agent aims to solve, it must remain stable all the time. Learning the transition model, which is the major concern of model-based RL, can also be seen as intrinsic motivation. The agent learns to predict the future state given the current one. The advantage of learning a model, even an inaccurate one, is twofold: the agent knows where to go and where not to go. It knows which regions of the state space are unsafe to explore and must be avoided, and which regions are unexplored and might be informative for improving the model. This brings us to another view of intrinsic reward, one that encourages diversity.
Our work is also relevant to sensitivity analysis and its use in training the parameters of dynamical models. After Chen et al.'s NeuralODE trained neural networks by sensitivity analysis of the network parameters, the method was successfully applied to various tasks such as learning dynamics (Rudy et al., 2019), optimal control (Han et al., 2018), and generative models (Grathwohl et al., 2018). Our method can be seen as a model-free sensitivity analysis in real-world systems. In NeuralODE, the gradient with respect to the parameters requires solving ODEs for both states and adjoint states, which requires a transition model. Since we are working directly on the physical system, we do not need to calculate the integrals forward in time; the system itself acts as a physical ODE solver.
The importance of learning from unlabelled experience is a known fact in animals. Many animals function efficiently soon after birth, before being exposed to massive labeled experience. Part of this might be due to unsupervised learning, but a major part of the story can be a genetic heritage after years of evolution, which Zador calls the genomic bottleneck. The same idea turned out to be valid in statistical learning, where an automatically discovered neural network architecture performs surprisingly well with a shared random weight (Gaier & Ha, 2019). The inductive bias embedded in neural network architectures could be analogous to the wiring of the brains of animal babies, which transfers from generation to generation through genes.
F.1 PROOFS
Proof to the lemma on voxelization error.
Proof. The voxels become boxes in 3D as in fig. 15. The gradient is estimated from the distance between two points in 3D coordinates. Hence, the source of the voxelization error is approximating the distance between two points in 3D by the distance between the centers of the corresponding boxes to which those points belong. This error is written next to the boxes in fig. 15. The maximum error is inversely proportional to the distance between voxels, meaning that voxels located far away from each other induce less voxelization error. This is intuitively clear: when two points are distant from each other, a slight change in their positions does not change the distance between them considerably. The upper bound on the error, however, occurs for a single voxel, where the error is bounded by the size of the voxel.
G MORE RESULTS
In this section, we present the results of additional experiments that were omitted from the main text due to space limits.
The following figures show GP models trained by a set of directional derivatives collected during the shaking phase. The results are provided for the experiments of sections 4.2 and 4.3.
| 1. How effective is the proposed method's ability to learn physical derivatives in terms of its usefulness for downstream tasks, such as optimizing a controller for a cost function?
2. How do the learned predictions compare to actual resulting perturbations, particularly in more complex domains and for downstream tasks?
3. How does the method scale with high dimensional state spaces, even with a small parameter space?
4. How does the proposed approach compare to learning GP dynamics models in terms of estimating physical derivatives?
5. What is the purpose of Figure 4, and how does it relate to the rest of the paper?
6. Are there any other experiments or comparisons that could be added to further validate the effectiveness of the proposed method?
7. How stable are the estimated gradients when the system is not very stable, especially in regions where the dynamics are propagated through time?
8. How does the method handle temporal noise, and are there any experiments that demonstrate its effectiveness in this regard? | Review | Review
The paper proposes learning physical derivatives, the derivative of the trajectory distribution with respect to policy parameters. The proposed method estimates changes in trajectories at a particular theta by using finite differences,
then fitting Gaussian Processes per timestep to generalize to new dtheta's. The paper then proposes techniques to robustify the process against noise in the system.
To deal with temporal noise, where trajectories are approximately equal up to a time shift, they simply
estimate the optimal shift and use the shifted version to estimate. To address more complicated noise, they assume sensitivity of trajectories to noise is small relative
to sensitivity to the parameters, and discretize the state space at a level such that trajectories that
differ primarily due to inherent noise look the same at the discretized level, while perturbed policy
parameters still lead to different trajectories. They then use the discretized trajectories to estimate
the finite differences.
Experiments illustrate how the learned predictions compare to actual resulting perturbations and
show resulting trajectories on certain toy domains and a physical robotic finger. All experiments
are done with very low dimensional state and policy spaces (1-3 dimensions each).
Without much more extensive experimental validation, this paper should be rejected. While I am not aware
of any prior work on learning physical derivatives, the actual methods used are not novel in of themselves
beyond being applied towards learning derivatives with respect to the policy. As such, the method should be
of practical interest in order to be accepted. With limited experiments on a single very low dimensional
domain and no comparisons against any alternative methods, there is little evidence demonstrating the actual
effectiveness of the proposed method, especially on more complex domains and for downstream tasks.
Suggested Experiments:
- Stability in gradient estimation
It seems like it could require huge numbers of samples to be able to estimate gradients at parameters
for which the system is not very stable, as states at later timesteps can easily change in
hard-to-predict ways as the dynamics are propagated through time. This should be an especially big issue
if we do not already have good stable controllers close to the desired solution and need to actually
conduct exploration in parameter space to solve a task. I would appreciate more extensive evaluation
across multiple different (simulated) domains and assessing the effectiveness of gradient estimation
along random parameters.
- Dimensionality of policy parameters and state spaces
The current experiments only involve very small parameter spaces. It would be important to see how well
using finite differences and GP regression scales with a higher dimensional search space, which can be
demonstrated on varying dimensionalities of an LQR system for example. It would also be important to see
how gradient estimation scales with high dimensional state spaces even with a small parameter space (like
in the PD controller experiment in the paper).
- Direct Comparison against learning dynamics models
Using the same data, compare (with same metrics as in table 1) physical derivatives estimated with the
proposed approach against learning GP dynamics models and rolling out the perturbed policy with the learned
model. Without a direct comparison against learning dynamics models and understanding what situations
learning physical derivatives provides better estimates, it is unclear when or why one would prefer to learn
physical derivatives in this way compared to a model based approach.
- Quantitative results measuring costs of learned controllers
Despite the name of the paper and a description of how to compute a policy gradient via physical derivatives,
there are no experiments involving such policy gradient updates as far as I can tell. While one advantage of the
method (as well as model based approaches) is the ability to learn in unsupervised manner, it would be extremely
helpful to validate how well the physical derivatives are estimated in terms of how useful they are for a downstream
task, such as optimizing a controller for a cost function. Right now, experimental results lack any comparisons
to other methods or any other way to assess the effectiveness of estimating physical derivatives.
A comparison against regular RL policy gradient methods (or other model free algorithms) and model based RL
would give an idea as to whether the physical derivatives learned are actually useful.
Other questions and comments:
- Table 1: is this evaluating the accuracy of the physical derivatives on the shaking data that was
used for learning, or on a validation set? If on a validation set, would the validation perturbations be
drawn from the same distribution as the training set?
- The zero shot planning experiment in section 4.4 seems very contrived. It does not seem like a useful task
to adjust the parameters of the PD controller in order to reach a state that isn't the target. The figures
illustrating trajectories are also not very convincing and unclear. Two points are labelled source state and
target state, but it is not clear which is the intermediate state it is supposed to reach. In any case, most
of the trajectories seem to vastly overshoot the target (final?) state, and it is hard to assess how close the
trajectories end up being to the intended states from a 2d representation of a 3d space. Quantitative results
would perhaps have been more useful in illustrating the effectiveness of using physical derivatives.
- What is the purpose of figure 4? It does not appear to be referenced in the text and it is not clear what is
being shown.
Other notes not part of decision:
Paper exceeds the recommended 8-page length
Lots of small typos in the text |
ICLR | Title
Efficient Certification for Probabilistic Robustness
Abstract
Recent developments on the robustness of neural networks have primarily emphasized the notion of worst-case adversarial robustness in both verification and robust training. However, often looser constraints are needed and some margin of error is allowed. We instead consider the task of probabilistic robustness, which assumes the input follows a known probabilistic distribution and seeks to bound the probability of a given network failing against the input. We focus on developing an efficient robustness verification algorithm by extending a bound-propagation-based approach. Our proposed algorithm improves upon the robustness certificate of this algorithm by up to 8× with no additional computational cost. In addition, we perform a case study on incorporating the probabilistic robustness verification during training for the first time.
1 Introduction
Neural networks have found great success in a wide variety of applications. For many of these applications, understanding when and how a neural network can fail is crucial. Szegedy et al. (2014) found that almost visually imperceptible perturbations of input images could drastically change the output of a neural network. This realization set off a large area of research, both in finding ways of attacking neural networks and developing defense and certification methods against said attacks. The most common setting which is considered is the worst-case robustness given an lp norm. There have been a number of adversarial robustness certification methods which operate by finding provable lp balls for which the output of the neural network is guaranteed to be constant. In other words, they find lower bounds on the minimum lp radius for which an adversarial attack exists. This is done by relaxing the network for a bounded domain. Convex relaxations are typically used (Salman et al., 2019), although some works also consider quadratic program formulations (Raghunathan et al., 2018). Note that these are input-specific, so they do not give guarantees about the entire input domain.
As opposed to the worst-case adversarial robustness, there is also great interest in the threat model where the primary concern is natural noise and corruption rather than a malicious adversary and thus the perturbations follow some probability distribution. Typical neural networks have been found to be vulnerable to common corruptions (Dodge & Karam, 2016; Geirhos et al., 2018). We distinguish between these two notions of robustness as adversarial robustness and probabilistic robustness. We note that the terms such as corruption robustness, probability of violation, and adversarial density have also been used to refer to the general concept of probabilistic robustness. For the threat model of natural noise and corruption, there has not been as much work developed as for worst-case adversarial robustness. One of the very first probabilistic verification algorithms without imposing assumptions on the decision boundary is known as the PROVEN algorithm (Weng et al., 2019). In PROVEN, the authors derive probabilistic verification bounds based on existing worst-case robustness verification such as Fast-Lin and CROWN (Weng et al., 2018; Zhang et al., 2018).
Contributions. In this work, we generalize the PROVEN algorithm proposed by Weng et al. (2019) to infinite support and greatly improve the tightness of the robustness certificate without additional computation cost. We name our algorithm I-PROVEN, as the proposed
algorithm is a significantly improved version of the PROVEN algorithm. The I-PROVEN algorithm achieves significant improvements (2×-8× tighter) in the tightness of the probabilistic robustness certificate for both MNIST and CIFAR-10 models without additional computation cost, and it also enables the certification of probability distributions with infinite support. Based on our proposed algorithm, we conduct a case study on augmenting an existing training pipeline with probabilistic robustness verification bounds, and we find mixed results for the training. We examine potential causes and implications.
2 Background and related works
Notation. In order to describe related work in depth, we will lay out some notation. We define a K-layer feed-forward ReLU neural network with n0 inputs and nK outputs f : Rn0 → RnK as
f(x) = f^(K)(x)
f^(i+1)(x) = W^(i+1) σ(f^(i)(x)) + b^(i+1)
f^(1)(x) = W^(1) x + b^(1)
In other words, f (i)(x) denotes the vector of pre-activation values in the ith layer. Generally, we work in the setting of image classifiers where a class c is classified over a class i if fc(x) − fi(x) > 0. To simplify notation, we assume that the neural networks f which we are working with have already had this margin function applied to it for some given c, i. In other words, we assume nK = 1 for convenience and we are interested in when f(x) > 0.
2.1 Adversarial robustness verification
Adversarial robustness verification asks, given a neural network f and a region R(ε) in the input space, does there exist an x ∈ R(ε) such that f(x) ≤ 0? To solve this problem, we can formulate it as an equivalent optimization problem: min_{x∈R(ε)} f(x). If no x such that f(x) ≤ 0 exists, or equivalently, if the minimum of f(x) is positive, then f is robust for the region R(ε). If we can prove that f is robust on the regions R(ε) for all ε ≤ ε̄, then the robustness certificate is ε̄. The robustness certificate is a lower bound on the true minimum distortion ε*.
The regions R(ε) of general interest are Lp balls Bp(x0, ε) for a given image or input x0. This arises from the interpretation that an adversary perturbs x0 by at most ε under a given Lp norm. Note that certification only informs robustness about a single image. As far as we know, it is infeasible to certify an entire dataset other than by processing it image by image.
Convex relaxation for provable verification. These methods find a convex relaxation of a neural network in order to find provable certifications for its adversarial robustness. We will discuss these methods in detail as our method builds on them in certain ways. There are a number of works following these methods (Weng et al., 2018; Singh et al., 2018; Zhang et al., 2018; Singh et al., 2019b), and a general framework for them is described in (Salman et al., 2019). We will use the setup used in CROWN (Zhang et al., 2018). In these methods, inequalities on pre-activation neurons are recursively computed
l^(j) ≤ A_L^(j) x + b_L^(j) ≤ f^(j)(x) ≤ A_U^(j) x + b_U^(j) ≤ u^(j)

for each layer j. Note that these are element-wise bounds: l^(j) and u^(j) are scalar vectors, and A_L^(j), b_L^(j), A_U^(j), b_U^(j) define linear functions of the input x. These linear bounds are obtained by relaxing the non-linear activation functions to linear lower and upper bounds, given that the inputs to the activation functions are within some interval found from the inequalities applied to earlier layers, l^(i), u^(i), i < j. These inequalities are propagated backwards through the network until the original input is reached. Under the typical lp ball threat model, Hölder's inequality can give scalar bounds on these layers and the process can continue to the final outputs. (Singh et al., 2019a; Tjandraatmadja et al., 2020) have made progress on improving these bounds beyond the convex relaxation gap pointed out by Salman et al. (2019) by considering the activation functions on multiple neurons jointly.
2.2 Probabilistic robustness
For probabilistic robustness, we consider a known probability distribution D : R^n → [0, 1] from which the inputs x are sampled. We focus on additive iid uniform noise, which we denote, in an abuse of notation, as B∞(x, ε). In other words, B∞(x, ε) is the distribution generated by sampling points uniformly from the hyperrectangle [x_1 − ε, x_1 + ε] × [x_2 − ε, x_2 + ε] × · · · × [x_n − ε, x_n + ε]. Then the problem of probabilistic robustness verification is to verify that
Pr_{x∼D}[f(x) > 0] ≥ 1 − Q (1)
for some given failure probability Q.
We can define the robustness certificate similarly to how it was done for adversarial robustness. Weng et al. (2019) and Anderson & Sojoudi (2020) particularly consider the maximum ε parameterizing D for which the above holds, and we also provide such results. This is found by binary searching over ε; empirically, we find that the robustness is monotonic in ε for the distributions we consider, but we note that there are no theoretical guarantees for this.
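A sketch of that search, where `certify` is a placeholder for any verifier returning True when eq. (1) holds at a given ε:

```python
def max_certified_eps(certify, lo=0.0, hi=1.0, iters=20):
    """Bisect for the largest eps with certify(eps) True, assuming monotonicity."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if certify(mid):
            lo = mid   # certified: try a larger radius
        else:
            hi = mid   # not certified: shrink the search interval
    return lo
```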
Sampling methods. Sampling gives well-established statistical guarantees which can be applied to the problem of probabilistic robustness. By using the neural network essentially as a black box, Chernoff bounds can estimate the probability of the neural network giving the incorrect classification given a distribution. This has the advantage of making no assumptions on the model or the distribution, but it requires a large number of samples to achieve a high degree of accuracy, and there is an inherent uncertainty present in the application of such an algorithm. Baluta et al. (2021) note, for example, that proving that the probability is between [0.1 − 0.5 × 10^−4, 0.1 + 0.5 × 10^−4] with a confidence of 0.9 would require 5.5 × 10^6 samples. To overcome this, they propose a framework which reduces the number of samples necessary, although this is dependent on the true probability. Anderson & Sojoudi (2020) also provide a method that can find upper bounds on the probability that a model is incorrect with a small number of samples. Webb et al. (2018) use a clever sampling method that leverages the layered structure of common architectures. They require upwards of 10^7 samples but are able to obtain precise estimations. Though they are unable to provide theoretical guarantees, they show that empirically, their estimations agree with naive Monte Carlo estimates with as many as 10^10 samples.
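The sampling approach of Anderson & Sojoudi (2020), used for comparison in section 4.3 below, is simple enough to sketch: draw roughly T ln(1/u) samples, and if all are classified correctly, conclude that at least a 1 − 1/T fraction of the distribution is correctly classified with false-positive rate at most u, since a failure mass above 1/T would survive all n draws with probability at most (1 − 1/T)^n ≤ u. The harness below is ours.

```python
import numpy as np

def mc_certify(predict_ok, sample, T=100, u=1e-4, seed=0):
    """Certify Pr[correct] >= 1 - 1/T with false-positive rate <= u.

    predict_ok: callable x -> bool, True if the network classifies x correctly
    sample:     callable rng -> x, one draw from the input distribution D
    """
    rng = np.random.default_rng(seed)
    n = int(round(T * np.log(1.0 / u)))   # 921 samples for T=100, u=1e-4
    return all(predict_ok(sample(rng)) for _ in range(n))
```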
2.3 Training adversarially-robust models
Training methods for improving adversarial robustness have generally taken two paths. The first augments the data with adversarial attacks in order to strengthen a model’s resistance to such attacks (Madry et al., 2017). The second approach adds loss regularization terms that help the model learn robust features of the data. Xiao et al. (2018) identifies weight sparsity and ReLU stability as important factors in a model’s adversarial robustness and builds a training framework which incorporates these. Other works use certification methods as regularization terms in order to improve the certifiable robustness of a model. Interval bound propagation (IBP) has found great success in this despite being a relatively loose certification method (Gowal et al., 2019). In particular, the efficiency of IBP has made it amenable to training. A number of other works have made progress in closing this efficiency gap (Zhang et al., 2019; Xu et al., 2020; Shi et al., 2021; Boopathy et al., 2021).
3 Our main results
In section 3.1, we illustrate the idea of deriving tighter probabilistic robustness certificate and provide the details of our I-PROVEN algorithm. We remark on alternative setups in section 3.2. In section 3.3, we conduct a case study on including our proposed probabilistic verification bounds into standard training. All experimental results are reported in section 4.
3.1 I-PROVEN: Improving the tightness of PROVEN algorithm
In this subsection, we show how we build on top of the state-of-the-art PROVEN algorithm (Weng et al., 2019) to derive the tighter probabilistic robustness certificates of I-PROVEN. Using convex relaxation methods, one can find linear bounds with respect to the input such that
A_L^(K) x + b_L^(K) ≤ f(x) ≤ A_U^(K) x + b_U^(K), ∀x ∈ supp(D). (2)
PROVEN argues that by applying probabilistic bounds

Pr_{x∼D}[A_L^(K) x + b_L^(K) ≥ 0] ≥ 1 − q, (3)
we can conclude that f(x) ≥ 0 with probability at least 1 − q. Notably, probabilistic inequalities are only considered in the final layer. Thus, they are able to give a probabilistic robustness certification.
To extend this method, we must look into how these linear bounds A_L, b_L, A_U, b_U were obtained. This is done recursively: every layer's pre-activation neurons are bounded by linear bounds with respect to the input. Then, scalar bounds are obtained using Hölder's inequality on the linear bounds and the support of the input. These scalar bounds define intervals in the domain of each activation function for the next layer. This allows linear bounds to be calculated over the activation function and for the next layer to continue propagating linear bounds.
The key observation is that the linear bounds in previous layers can also apply with some probability rather than strictly, and that the failure probabilities accumulate linearly. Assume that we have some q′ for which

Pr_{x∼D}[f(x) ≥ A_L^(K) x + b_L^(K)] ≥ 1 − q′. (4)
Then a simple union bound gives that the probability of f(x) ≥ 0 is at least 1 − q − q′. This simple scenario can be extended to involve all layers of the network.
Theorem 1. In the convex-relaxation framework, for each scalar inequality i (l ≤ A_L x + b_L or u ≥ A_U x + b_U), denote by q_i the probability that this inequality is violated with respect to the probability distribution D from which x is sampled. Then the probability of the final output of the convex-relaxation algorithm holding for a given x ∼ D is at least 1 − Σ_i q_i.
Proof. The convex-relaxation framework operates by making a series of m inequalities which ultimately lead to the output layer. We label these L_1, L_2, . . . , L_m. When we apply probabilistic bounds, these inequalities may not be guaranteed. We denote by E_j the event that L_j is correct. Even though there is a chance of failure, we operate assuming that all inequalities were correct. If they are indeed all correct for some x, then we can conclude that f(x) ≥ l^(K) for this particular x, as expected. It then suffices to find a lower bound on the probability of the intersection of the E_j's. We have
Pr_{x∼D}[f(x) ≥ l^(K)] ≥ Pr_{x∼D}[⋂_j E_j] = 1 − Pr_{x∼D}[⋃_j Ē_j] ≥ 1 − Σ_j Pr_{x∼D}[Ē_j] = 1 − Σ_j q_j (5)
as desired.
Now we will take advantage of this theorem. Assign every scalar inequality i (either A_L x + b_L ≥ l or A_U x + b_U ≤ u) a failure probability q_i, with the q_i summing to Q altogether. We can invert the probabilistic inequalities to find scalars l and u such that these hold with the given failure probability q_i. In particular, say that functions γ_L(a, b) and γ_U(a, b) are such that

γ_L(a, b) ≤ Pr_{x∼D}[ax + b > 0] ≤ γ_U(a, b)

for any a, b. We want to choose l such that Pr[A_L x + b_L < l] ≤ q_i. Thus, it suffices to solve for l such that γ_L(A_L, b_L − l) = 1 − q_i.
Algorithm 1 I-PROVEN for B∞(x, ε)
1: procedure PrIneq(x, ε, A_L, b_L, A_U, b_U, q)
2:   l_strict ← A_L x + b_L − ε ‖A_L‖_1
3:   u_strict ← A_U x + b_U + ε ‖A_U‖_1
4:   l_prob ← A_L x + b_L − ε √(2 ln(1/q)) ‖A_L‖_2
5:   u_prob ← A_U x + b_U + ε √(2 ln(1/q)) ‖A_U‖_2
6:   return max(l_strict, l_prob), min(u_strict, u_prob)
7: end procedure
8: procedure IPROVEN(x, ε, f, Q)
9:   q ← Q/(2 × number of neurons)
10:  l^(1), u^(1) ← PrIneq(x, ε, W_1, b_1, W_1, b_1, q)
11:  for i in [2, K] do
12:    A_L^(i), b_L^(i), A_U^(i), b_U^(i) ← GetLinearBounds(l^(:i), u^(:i), A_L^(:i), b_L^(:i), A_U^(:i), b_U^(:i))
13:    l^(i), u^(i) ← PrIneq(x, ε, A_L^(i), b_L^(i), A_U^(i), b_U^(i), q)
14:  end for
15:  return l^(K)
16: end procedure
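A NumPy sketch of the PrIneq step, combining the Hölder (strict) and Hoeffding (probabilistic) intervals as in lines 2-6 of Algorithm 1, with one linear bound per neuron; the function name and shapes are ours.

```python
import numpy as np

def pr_ineq(x, eps, A_L, b_L, A_U, b_U, q):
    """Scalar pre-activation bounds for x' ~ Uniform(B_inf(x, eps)).

    A_L, A_U: (n, d) linear coefficients; b_L, b_U: (n,) offsets.
    """
    c_l, c_u = A_L @ x + b_L, A_U @ x + b_U
    l_strict = c_l - eps * np.abs(A_L).sum(axis=1)      # Hoelder, l_inf ball
    u_strict = c_u + eps * np.abs(A_U).sum(axis=1)
    rad = eps * np.sqrt(2.0 * np.log(1.0 / q))          # Hoeffding radius
    l_prob = c_l - rad * np.linalg.norm(A_L, axis=1)    # uses l_2 norms
    u_prob = c_u + rad * np.linalg.norm(A_U, axis=1)
    # keep whichever interval is tighter, per neuron
    return np.maximum(l_strict, l_prob), np.minimum(u_strict, u_prob)
```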
For the uniform distribution B∞(x, ε), such functions γ_L and γ_U can be derived from Hoeffding's inequality, as seen in Corollary 3.2 of (Weng et al., 2019). Inverting them, we obtain
l = A_L x + b_L − ε √(2 ln(1/q_i)) ‖A_L‖_2. (6)
Performing the forward and backward bound propagation ultimately yields a final scalar lower bound l^(K). By Theorem 1, we have that

Pr_{x∼D}[f(x) ≥ l^(K)] ≥ 1 − Σ_i q_i = 1 − Q.
We choose a simple scheme in assigning the failure probabilities. We select some subset of layers S and assign all inequalities pertaining to these layers equal q_i's. All other q_i's are set to 0. So

q_i = Q / (2 × number of neurons within layers of S) if i ∈ S, and q_i = 0 if i ∉ S. (7)
q_i = 0 indicates using the strict bounds and is only possible when D has bounded support, as with B∞(x, ε). In this case, the Hölder inequality for the l∞ norm is used. Note that the original PROVEN algorithm is equivalent to only selecting the last layer.
We found that more complex assignment strategies such as optimizing over q with sum equal to 1 did not lend themselves to any significant improvements compared to this simple method, especially given their additional run-time cost. However, there are a few factors to keep in mind when choosing the subset of layers S. In the case of uniform bounded noise, if the non-zero qi are small, it is possible for the strict inequalities, which find
l_strict = A_L x + b_L − ε ‖A_L‖_1, u_strict = A_U x + b_U + ε ‖A_U‖_1 (8)
to give tighter intervals than the probabilistic inequalities.
The different weight norms also create some discrepancy in the effectiveness of the probabilistic bounds. In particular, although the matrices A in the above equations are not directly the network weights, we found that I-PROVEN performed worse on models with sparse weights. To take these into account, we return the tightest intervals using either the strict or probabilistic bounds, although we do not update the q’s to incorporate our choice.
3.2 Other distributions and certifiers
I-PROVEN can support distributions with infinite support. The only requirement which I-PROVEN places on the distribution D is that we must have lower and upper bounds on Pr_{x∼D}[ax + b > 0] for a, b ∈ R. This may include distributions with infinite support. For example, for additive iid Gaussian noise with standard deviation ε, we can obtain

l = A_L x + b_L − ε erf⁻¹(1 − 2q_i) ‖A_L‖_2 (9)

from basic facts about Gaussian distributions. Similar formulas can be found for Gaussian mixtures when such a distribution is known and relevant. Note that q_i must be non-zero for all inequalities when dealing with distributions with infinite support, as strict alternatives no longer exist.
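The Gaussian counterpart of the probabilistic interval replaces the Hoeffding radius with the Gaussian quantile. A sketch follows, written with the standard identity Φ⁻¹(1 − q) = √2 erf⁻¹(1 − 2q); the explicit √2 factor is our convention and may be absorbed differently in eq. (9).

```python
import numpy as np
from scipy.special import erfinv

def gaussian_lower_bound(x, sigma, A_L, b_L, q):
    """l with Pr[A_L x' + b_L >= l] >= 1 - q for x' = x + N(0, sigma^2 I)."""
    rad = np.sqrt(2.0) * sigma * erfinv(1.0 - 2.0 * q)  # one-sided quantile
    return A_L @ x + b_L - rad * np.linalg.norm(A_L, axis=1)
```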
I-PROVEN can be used with any linear-relaxation-based certifier at no additional computational cost. As I-PROVEN only changes the evaluation of the scalar bounds, it adds no time complexity to whatever method it is used with, and it can be incorporated in any such linear-relaxation method. For a network with K layers and n neurons per layer, the entire method when used with CROWN is O(n^3 K^2) (Zhang et al., 2018), an additional factor of K compared to a single pass of the network.
3.3 Probabilistic verification based training
Our training method is simply a substitution of PROVEN into CROWN-IBP (Zhang et al., 2019). It requires two forward passes. The first computes strict bounds on each pre-activation neuron using IBP. The second performs linear bound propagations to compute linear bounds on the output. Then probabilistic bounds are applied to obtain the final scalar lower and upper bounds which are used in the loss function as described in CROWN-IBP’s original paper. Note that this effectively means that only the original PROVEN is applied in this training method.
4 Experiments
4.1 Implementation details
We performed experiments on the MNIST and CIFAR-10 datasets (LeCun et al., 2010; Krizhevsky, 2009). For our verification experiments, we used pretrained models provided in (Weng et al., 2018) and (Zhang et al., 2018), and the images from each dataset were scaled so that their values were in [−0.5, 0.5]. For our training experiments, we used the architectures described in (Gowal et al., 2019), and the images in the dataset were scaled to be in [0, 1]. Our MNIST models were trained for 100 epochs with a batch size of 100, with a warm-up period for the first 5 epochs and a ramp-up period for the next 45. We followed the β and κ schedules used in (Zhang et al., 2019) for the PROVEN-IBP loss, and we increase our ε from 0 to 4 × 10^−1 during the ramp-up period. We evaluate the validation set using ε = 3 × 10^−1. All code was written in Python using the PyTorch library (Paszke et al., 2019), and training was conducted on an NVIDIA Tesla V100 GPU.
4.2 Verification results
We compare the original PROVEN bounds with our improved PROVEN on various MNIST and CIFAR-10 classifiers in table 1. The classifiers are k-layer MLPs with n neurons in each layer, denoted as k × [n]. In I-PROVEN, the q_i are all non-zero and equal across all layers. It achieves much better performance than the original PROVEN (Weng et al., 2019), with up to a 600% increase for 99.99% probabilistic robustness. Note that Q = 0 is simply the adversarial robustness certificate.
I-PROVEN's improvement over PROVEN can be observed from the intermediate layers' bounds. In table 2, the average interval gap of the neurons in each layer is tabulated. The model in question is a 4-layer MNIST MLP (layer 0 indicates the input, which is why the interval length is simply 2ε). PROVEN and I-PROVEN are each evaluated on 10 images for various ε, Q. The scalar intervals l, u which give bounds on each neuron in both methods are averaged within each layer. The smaller the interval size (that is, the tighter the interval), the better. I-PROVEN's intervals are noticeably tighter from the first layer onwards, as it invests a portion of the total failure probability Q to tighten these earlier inequalities. This does mean PROVEN's final inequalities are sharper than I-PROVEN's, which we can notice by observing the ratio of interval widths between layer 3 and layer 4. For ε = 0.001, Q = 0.1, for example, PROVEN goes from 0.045 to 0.564, a 12.5× increase, while I-PROVEN goes from 0.009 to 0.203, a 22.5× increase. However, the improvements I-PROVEN makes in the earlier layers mean it still ends up with tighter final bounds than PROVEN.
Note that our method is also able to handle additive Gaussian noise with only small changes to our probabilistic inequalities. Notably, we do not need to truncate the Gaussian distribution as (Weng et al., 2019) did. We plot our results for both additive uniform noise and additive Gaussian noise in fig. 1.
4.3 Comparison to other methods
In the earlier section, we compared I-PROVEN to PROVEN in terms of their robustness certificates. Now, we show comparisons to IBP and to a simple Monte Carlo approach based on (Anderson & Sojoudi, 2020). IBP does not generally perform well for arbitrary networks, so we use IBP-trained models in our experiment. In particular, we trained three CIFAR-10 CNN models A, B, and C. Model A was trained with the standard loss, Model B was trained with an IBP loss term with ε ramping up to 2.2/255, and Model C was trained with an IBP loss term with ε ramping up to 8.8/255. We use two different ε, 2/255 and 8/255, for each model and certifier. Q is fixed at 1%. We apply these certifiers on 1000 images from the validation dataset.
For the Monte Carlo approach, we take T ln(1/u) samples from the uniform B∞(x, ε) distribution, where u = 0.0001 and T = 100, rounding to 921 samples per image x. Then if every sample is correctly classified, we conclude that at least 1 − 1/T = 99% of the distribution is correctly classified, with false positive rate less than u. This Monte Carlo approach has a higher false negative rate than other sampling approaches, but we chose it because it requires the lowest number of samples as far as we are aware.
As the results in table 3 show, I-PROVEN performs better than IBP across all models, but this gap diminishes greatly in the two IBP-trained models, particularly for Model C. Similarly, we see the gap between PROVEN and I-PROVEN close, and PROVEN even outperforms I-PROVEN on Model C. We see a similar phenomenon in the next section, section 4.4, and provide an explanation.
Unsurprisingly, the Monte Carlo approach obtains better results than any of the relaxation-based approaches. Furthermore, in terms of timing, PROVEN/I-PROVEN took 128-131s for all 1000 images per model, while the Monte Carlo approach consistently took around 4s. Evaluating the standard error and IBP error took under a second. However, the exact timing depends somewhat on the situation. In this experiment, I-PROVEN could not be batched at all, as the memory use was too expensive, while Monte Carlo methods can easily be batched.
4.4 A case study on training with I-PROVEN
We examine I-PROVEN's verification compared with IBP's on models trained with IBP and PROVEN-IBP, respectively, on the small CNN from (Gowal et al., 2019); the verification algorithms consider B∞(x, 0.3). Note that IBP is considering the model's adversarial robustness while I-PROVEN is considering the probabilistic robustness for Q = 1 × 10^−2 (equivalently, for 99% of the ball). We found that I-PROVEN does not obtain significantly better results than IBP in either case, mirroring our results on the IBP-trained models in table 3. We conjecture that this is due to the weight sparsity induced by IBP's involvement in the training, for both pure IBP training and PROVEN-IBP, and that this sparsity is also present in the linear bounds used by I-PROVEN. Weight sparsity is beneficial for adversarial robustness, particularly for the l∞ norm (Xiao et al., 2018). However, as far as we are aware, there is no reason to expect weight sparsity to help a model's probabilistic robustness, and as noted in section 3.1, I-PROVEN's probabilistic inequalities prefer more evenly distributed matrices.
This weight sparsity also explains why PROVEN outperforms I-PROVEN on Model C of table 3. When the model weights are sparse, the strict inequality is tighter than the probabilistic inequalities for very small qi, so I-PROVEN's usual strategy of distributing Q evenly among the inequalities does not pay off.
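This crossover can be made concrete. Comparing eqs. (6) and (8), the probabilistic slack ε√(2 ln(1/q))‖A‖₂ beats the strict slack ε‖A‖₁ exactly when √(2 ln(1/q)) < ‖A‖₁/‖A‖₂; this ratio is √d for an evenly distributed row of length d but only 1 for a 1-sparse row. A small illustrative check (our own sketch, not the authors' code):

import numpy as np

def prob_bound_is_tighter(a, q):
    # eps cancels, so compare sqrt(2 ln(1/q)) against ||a||_1 / ||a||_2.
    return np.sqrt(2.0 * np.log(1.0 / q)) < np.linalg.norm(a, 1) / np.linalg.norm(a, 2)

q = 1e-5                                  # a typical per-inequality q_i
dense = np.ones(784)                      # evenly distributed row: ratio = sqrt(784) = 28
sparse = np.eye(784)[0]                   # 1-sparse row: ratio = 1
print(prob_bound_is_tighter(dense, q))    # True:  28 > sqrt(2 ln 1e5) ~ 4.8
print(prob_bound_is_tighter(sparse, q))   # False:  1 < 4.8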
To test our hypothesis, we compared the last layer's linear lower bounds AL between a CNN trained in a standard manner and a CNN trained with an IBP loss as in (Gowal et al., 2019). We show this for an image from the validation set in fig. 2. These AL's came from I-PROVEN applied with ε = 0.3, Q = 1 × 10−2, and the failure probabilities in the last two layers non-zero and equal. Each 28 × 28 grid corresponds to a logit in the output. The grid for 3 is all 0, as this specifically considers the margin function between each class and the true class, 3. The absolute values of the matrix are scaled to fit in [0, 1]. Evidently, the values from the IBP model are far more sparse. Further examples are included in the appendix.
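For reference, a sketch of how such a visualization could be produced is given below, assuming A_L is the 10 × 784 matrix of last-layer linear lower-bound rows for a 28 × 28 input; the plotting details are our own, not the paper's code.

import numpy as np
import matplotlib.pyplot as plt

def plot_lower_bound_rows(A_L):
    # One 28x28 grid per output logit; the row for the true class is all zero
    # because the network output is the margin f_c(x) - f_i(x).
    fig, axes = plt.subplots(1, 10, figsize=(20, 2))
    for i, ax in enumerate(axes):
        grid = np.abs(A_L[i]).reshape(28, 28)
        if grid.max() > 0:
            grid = grid / grid.max()      # scale absolute values into [0, 1]
        ax.imshow(grid, cmap="gray")
        ax.set_title(str(i))
        ax.axis("off")
    plt.show()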
5 Conclusion
In this paper, we present I-PROVEN, an algorithm that can efficiently verify the probabilistic robustness of a neural network. We show strong improvements over the prior method for this problem: we remove the assumption of bounded support and significantly improve the tightness of the robustness certificate at no additional cost. Furthermore, we present a training framework for probabilistic robustness and demonstrate its shortcomings. By taking a closer look at these results, we make steps towards understanding the relation between adversarial certified defense methods and our own.
1. What is the focus of the paper regarding probabilistic certification?
2. What are the strengths of the proposed approach, particularly in comparison to prior works like PROVEN?
3. What are the weaknesses of the paper's empirical results, especially concerning the lack of comparisons with other methods?
4. How can the reviewer assess the significance of the improvement claimed by the authors without proper context and comparisons?
5. Are there any suggestions for improving the paper by including comparisons with relevant works in the field?
Summary Of The Paper
The paper considers the problem of probabilistic certification of robustness, that is, bounding the probability that a random point in some hyperrectangle changes the classification. It improves an existing method, PROVEN, to produce a new method, I-PROVEN, which it shows achieves better empirical results.
Review
Strengths: An interesting problem, and the method improves over a previous one.
Weaknesses: Overall, I found the empirical results unconvincing because they compared only to a single existing method (which was essentially a worse version of the current method). A simple baseline would be to use a stronger verification method (e.g. https://arxiv.org/pdf/2010.11645.pdf) with Q=0, which might actually outperform the proposed method even allowing for larger Q. There are also methods for related problems that should be able to handle relatively small Q (perhaps smaller than those considered in the current paper), such as this paper: https://arxiv.org/abs/2008.10581. It is difficult to contextualize the results without comparisons or at least discussions of these methods. |
ICLR | Title
Efficient Certification for Probabilistic Robustness
1. What is the focus of the paper regarding statistical/probabilistic robustness?
2. What are the strengths of the proposed approach, particularly in its clear exposition, goals, and methodology?
3. What are the weaknesses of the paper, especially regarding practicality and comparison with other methods?
4. How does the reviewer assess the trade-off between computational complexity and tightness of statistical bounds?
5. What is the limitation of the extension to infinite support noise distributions?
6. How does the reviewer suggest comparing the proposed method with other approaches such as randomized smoothing of classifiers and MACER algorithm?
Summary Of The Paper
In this paper, the authors consider a notion of statistical/probabilistic robustness which does not require a model to be robust to all inputs in a specified set, only to a certain high-probability subset of these inputs. The authors rely on a bound-propagation methodology to compute the probability that a given input property is violated. In particular, the authors extend a known methodology (PROVEN) for computing probabilistic robustness and show that in many cases their methodology is better than that of PROVEN.
Review
I think the pros of this paper are its clear exposition, goals, and methodology. In particular, the problem of adversarial examples is a serious one, and continuing to work on expanding adversarial robustness guarantees to larger models is an avenue of research that can have clear impact. Further, the goals and method of the paper are clearly explained. Finally, the proposed algorithm does have some key empirical advantages over the PROVEN algorithm, often obtaining significantly better certified radii.
The cons of this paper lie in questions of its practicality compared not just to the PROVEN algorithm but to sampling-based methods. For both PROVEN and I-PROVEN I find the use of convex relaxation a questionable choice due to several limitations. Firstly, these relaxations are known to introduce non-trivial over-approximations in the output. This is, of course, acceptable when performing rigorous safety verification, but given that the guarantees desired in this work are only statistical/probabilistic, performing such relaxations (especially on large CIFAR models) might introduce unnecessary over-approximation that leads to vastly conservative estimates of the safety radius. I would be greatly surprised if this method were able to produce better radii than statistical methods, given that they do not introduce such over-approximations. Thus, there is an unexplored trade-off in a few directions that I think limits the impact of this paper. In particular, my questions are: (1) How much more costly is this method (in terms of computational complexity and time) than the standard PROVEN algorithm? Just above subsection 3.3 they note its complexity relative to CROWN, and I suspect that this complexity comparison will hold compared to PROVEN (that it is only a linear factor slower); however, it would be good to get an idea of how much slower this is in practice for their large CIFAR networks. (2) How much tighter are the statistical bounds? I expect them to be a great deal tighter, but also much more expensive to compute. It would be interesting to see what the trade-off is here. Clearly, I-PROVEN should theoretically fall in between PROVEN and sampling methods in terms of tightness and computational time, yet the authors do not explore this.
Moreover, I think the extension to infinite support noise distributions is a rather weak extension. Clearly, PROVEN can also truncate a Gaussian and make the same erf function argument that is used here without any issue, so I do not see this as an extension unique to the proposed methodology. Of course, for statistical methods, infinite support is just fine, and so are poorly defined densities (i.e., distributions that can only be sampled from efficiently); this method cannot support the latter, which should be noted as a limitation.
Finally, I think it would be interesting to compare the training method proposed here to randomized smoothing of classifiers and the MACER algorithm for training of robust randomized smoothing classifiers. |
ICLR | Title
Efficient Certification for Probabilistic Robustness
Abstract
Recent developments on the robustness of neural networks have primarily emphasized the notion of worst-case adversarial robustness in both verification and robust training. However, often looser constraints are needed and some margin of error is allowed. We instead consider the task of probabilistic robustness, which assumes the input follows a known probabilistic distribution and seeks to bound the probability of a given network failing against the input. We focus on developing an efficient robustness verification algorithm by extending a bound-propagation-based approach. Our proposed algorithm improves upon the robustness certificate of this algorithm by up to 8× while with no additional computational cost. In addition, we perform a case study on incorporating the probabilistic robustness verification during training for the first time.
1 Introduction
Neural networks have found great success in a wide variety of applications. For many of these applications, understanding when and how a neural network can fail is crucial. Szegedy et al. (2014) found that almost visually imperceptible perturbations of input images could drastically change the output of a neural network. This realization set off a large area of research, both in finding ways of attacking neural networks and developing defense and certification methods against said attacks. The most common setting which is considered is the worst-case robustness given an lp norm. There have been a number of adversarial robustness certification methods which operate by finding provable lp balls for which the output of the neural network is guaranteed to be constant. In other words, they find lower bounds on the minimum lp radius for which an adversarial attack exists. This is done by relaxing the network for a bounded domain. Convex relaxations are typically used (Salman et al., 2019), although some works also consider quadratic program formulations (Raghunathan et al., 2018). Note that these are input-specific, so they do not give guarantees about the entire input domain.
As opposed to the worst-case adversarial robustness, there is also great interest in the threat model where the primary concern is natural noise and corruption rather than a malicious adversary and thus the perturbations follow some probability distribution. Typical neural networks have been found to be vulnerable to common corruptions (Dodge & Karam, 2016; Geirhos et al., 2018). We distinguish between these two notions of robustness as adversarial robustness and probabilistic robustness. We note that the terms such as corruption robustness, probability of violation, and adversarial density have also been used to refer to the general concept of probabilistic robustness. For the threat model of natural noise and corruption, there has not been as much work developed as for worst-case adversarial robustness. One of the very first probabilistic verification algorithms without imposing assumptions on the decision boundary is known as the PROVEN algorithm (Weng et al., 2019). In PROVEN, the authors derive probabilistic verification bounds based on existing worst-case robustness verification such as Fast-Lin and CROWN (Weng et al., 2018; Zhang et al., 2018).
Contributions. In this work, we generalize the PROVEN algorithm proposed by Weng et al. (2019) to infinite support and greatly improve the tightness of the robustness certificate without additional computation cost. We name our algorithm I-PROVEN, as the proposed
algorithm is a significantly improved version of PROVEN algorithm. The I-PROVEN algorithm can achieve significant improvements (2×-8× tighter) on the tightness of probabilistic robustness certificate for both MNIST and CIFAR-10 models without additional computation cost, and it also enables the certification of probability distributions with infinite support. Based on our proposed algorithm, we conduct a case study on augmenting an existing training pipeline with probabilistic robustness verification bounds, and we find mixed results for the training. We examine potential causes and implications.
2 Background and related works
Notation. In order to describe related work in depth, we will lay out some notation. We define a K-layer feed-forward ReLU neural network with n0 inputs and nK outputs f : Rn0 → RnK as
f(x) = f (K)(x)
f (i+1)(x) =W (i+1)σ(f (i)(x)) + b(i+1)
f (1)(x) =W (1)x+ b(1)
In other words, f (i)(x) denotes the vector of pre-activation values in the ith layer. Generally, we work in the setting of image classifiers where a class c is classified over a class i if fc(x) − fi(x) > 0. To simplify notation, we assume that the neural networks f which we are working with have already had this margin function applied to it for some given c, i. In other words, we assume nK = 1 for convenience and we are interested in when f(x) > 0.
2.1 Adversarial robustness verification
Adversarial robustness verification asks, given a neural network f and a region R( ) in the input space, does there exist an x ∈ R( ) such that f(x) ≤ 0? To solve this problem, we can formulate it as an equivalent optimization problem: minx∈R( ) f(x). If no such x such that f(x) ≤ 0 exists, or equivalently, if the minimum f(x) is positive, then f is robust for region R( ). If we can prove that f is robust on regions R( ) for all ≤ , then the robustness certificate is . The robustness certificate is a lower bound for the true minimum distortion ∗.
The regions R( ) of general interest are Lp balls Bp(x0, ) for a given image or input x0. This arises from the interpretation that an adversary is perturbing x0 by at most under a given Lp norm. Note that certification only informs robustness about a single image. As far as we know, it is infeasible to certify an entire dataset other than processing it image by image.
Convex relaxation for provable verification. These methods find a convex relaxation of a neural network in order to find provable certifications for its adversarial robustness. We will discuss these methods in detail as our method builds on them in certain ways. There are a number of works following these methods (Weng et al., 2018; Singh et al., 2018; Zhang et al., 2018; Singh et al., 2019b), and a general framework for them is described in (Salman et al., 2019). We will use the setup used in CROWN (Zhang et al., 2018). In these methods, inequalities on pre-activation neurons are recursively computed
l(j) ≤ A(j)L x+ b (j) L ≤ f (j)(x) ≤ A(j)U x+ b (j) U ≤ u (j)
for each layer j. Note that these are element-wise bounds, l(j) and u(j) are scalar vectors, and A
(j) L , b (j) L , A (j) U , b (j) U are linear transformations of inputs x. These linear bounds are obtained by relaxing the non-linear activation functions to linear lower and upper bounds given that the inputs to the activation functions are within some interval found from the inequalities applied to earlier layers, l(i), u(i), i < j. These inequalities are propagated backwards through the network until the original input is reached. Under the typical lp ball threat model, Hölder’s inequality can give scalar bounds on these layers and the process can continue to the final outputs. (Singh et al., 2019a; Tjandraatmadja et al., 2020) have made progress on improving these bounds beyond the convex relaxation gap pointed out by Salman et al. (2019) by considering the activation functions on multiple neurons jointly.
2.2 Probabilistic robustness
For probabilistic robustness, we are considering a known probability distribution D : Rn → [0, 1] which the inputs x are sampled from. We will focus on additive iid uniform noise which we denote, in an abuse of notation, as B∞(x, ). In other words, B∞(x, ) is the distribution generated by sampling points evenly from the hyperrectangle [x1 − , x1 + ]× [x2 − , x2 + ]× · · · × [xn − , xn + ]. Then the problem of probabilistic robustness verification is to verify that
Prx∼D [f(x) > 0] ≥ 1−Q (1)
for some given failure probability Q.
We can define the robustness certificate similarly to how it was done for adversarial robustness. Weng et al. (2019) and Anderson & Sojoudi (2020) particularly consider the maximum parameterizing D for which the above holds and we also provide such results. This is found by binary searching over , as empirically we find that the robustness is monotonic in for the distributions we consider, but we note that there are no theoretical guarantees for this.
Sampling methods. Sampling gives well-established statistical guarantees which can be applied to the problem of probabilistic robustness. By using the neural network essentially as a black-box, Chernoff bounds can estimate the probability of the neural network giving the incorrect classification given a distribution. This has the advantage of making no assumptions on the model or the distribution, but requires a large number of samples to achieve a high degree of accuracy and there is an inherent uncertainty present in the application of such an algorithm. Baluta et al. (2021) notes for example, that proving that the probability is between [0.1− 0.5× 10−4, 0.1 + 0.5× 10−4] with a confidence of 0.9 would require 5.5× 106 samples. To overcome this, they propose a framework which reduces the number of samples necessary, although this is dependent on the true probability. Anderson & Sojoudi (2020) also provide a method that can find upper bounds on the probability that a model is incorrect with a small number of samples. Webb et al. (2018) uses a clever sampling method that leverages the layered structure of common architectures. They require upwards of 107 samples but are able to obtain precise estimations. Though they are unable to provide theoretical guarantees, they show that empirically, their estimations agree with naive Monte Carlo estimates with as many as 1010 samples.
2.3 Training adversarially-robust models
Training methods for improving adversarial robustness have generally taken two paths. The first augments the data with adversarial attacks in order to strengthen a model’s resistance to such attacks (Madry et al., 2017). The second approach adds loss regularization terms that help the model learn robust features of the data. Xiao et al. (2018) identifies weight sparsity and ReLU stability as important factors in a model’s adversarial robustness and builds a training framework which incorporates these. Other works use certification methods as regularization terms in order to improve the certifiable robustness of a model. Interval bound propagation (IBP) has found great success in this despite being a relatively loose certification method (Gowal et al., 2019). In particular, the efficiency of IBP has made it amenable to training. A number of other works have made progress in closing this efficiency gap (Zhang et al., 2019; Xu et al., 2020; Shi et al., 2021; Boopathy et al., 2021).
3 Our main results
In section 3.1, we illustrate the idea of deriving tighter probabilistic robustness certificate and provide the details of our I-PROVEN algorithm. We remark on alternative setups in section 3.2. In section 3.3, we conduct a case study on including our proposed probabilistic verification bounds into standard training. All experimental results are reported in section 4.
3.1 I-PROVEN: Improving the tightness of PROVEN algorithm
In this subsection, we will show how we could build on top of the state-of-the-art PROVEN algorithm (Weng et al., 2019) to derive tighter probabilistic robustness certificate in IPROVEN. Using convex relaxation methods, one can find linear bounds with respect to the input such that,
A (K) L x+ b (K) L ≤ f(x) ≤ A (K) U x+ b (K) U , ∀x ∈ supp(D). (2)
PROVEN argues that by applying probabilistic bounds
Pr x∼D
[A (K) L x+ b (K) L ≥ 0] ≥ 1− q, (3)
we can conclude that f(x) ≥ 0 with probability at least 1 − q. Notably, probabilistic inequalities are only considered in the final layer. Thus, they are able to give a probabilistic robustness certification.
To extend this method, we must look into how these linear bounds AL, bL, AU , bU were obtained. This is done recursively: every layer’s pre-activation neurons are bounded by linear bounds with respect to the input. Then, scalar bounds are obtained using Hölder’s inequality on the linear bounds and the support of the input. These scalar bounds define intervals in the domain of each activation function for the next layer. This allows linear bounds to be calculated over the activation function and for the next layer to continue propagating linear bounds.
The key observation is that these linear bounds in previous layers can also apply with some probability rather than strictly and that the failure probabilities accumulate linearly. Assume that we have some q′ for which
Pr x∼D
[f(x) ≥ A(K)L x+ b (K) L ] ≥ 1− q ′. (4)
Then a simple union bound gives that the probability of f(x) ≥ 0 is at least 1− q − q′. This is a simple scenario which can be extended to involve all layers of the network. Theorem 1. In the convex-relaxation framework, for each scalar inequality i (l ≤ ALx+ bL or u ≥ AUx+ bU ), denote qi as the probability that this inequality is violated with respect to the probability distribution D which x is sampled from. Then the probability of the final output of the convex-relaxation algorithm holding for a given x ∼ D is ≥ 1− ∑ i qi.
Proof. The convex-relaxation framework operates by making a series of m inequalities which ultimately lead to the output layer. We can label these L1, L2, . . . , Lm. When we apply probabilistic bounds, these inequalities may not be guaranteed. We will denote Ej to be the event that Lj is correct. Even though there is the chance of failure, we will operate assuming that all inequalities were correct. If they are indeed all correct for some x, then we can conclude that f(x) ≥ l(K) for this particular x as expected. Then it suffices to find a lower bound on the probability of the intersection of the Ej ’s. We have
Pr x∼D
[ f(x) ≥ l(K) ] ≥ Pr x∼D ⋂ j Ej = 1− Pr x∼D ⋃ j Ej ≥ 1−∑ j Pr x∼D [ Ej ] = 1− ∑ j qj (5)
as desired.
Now we will take advantage of this theorem. Assign every scalar inequality i (either ALx+ bL ≥ l or AUx+ bU ≤ u) a failure probability qi which sum to Q altogether. We can invert the probabilistic inequalities to find scalars l and u such that these hold with given failure probability qi. In particular, say that functions γL(a, b) and γU (a, b) are such that
γL(a, b) ≤ Pr x∼D [ax+ b > 0] ≤ γU (a, b)
for any a, b. We want to choose l such that Pr[ALx+ bL < l] ≤ qi. Thus, it suffices to solve for l such that γL(AL, bL − l) = 1− qi.
Algorithm 1 I-PROVEN for B∞(x, ) 1: procedure PrIneq(x, , AL, bL, AU , bU , q) 2: lstrict → ALx+ bL − ||AL||1 3: ustrict → AUx+ bU + ||AU ||1 4: lprob → ALx+ bL − √ 2 ln 1q ||AL||2
5: uprob → AUx+ bU + √ 2 ln 1q ||AU ||2
6: return max(lstrict, lprob),min(ustrict, uprob) 7: end procedure 8: procedure IPROVEN(x, , f , Q) 9: q → Q/(2× number of neurons)
10: l(1), u(1) → PrIneq(x, ,W1, b1,W1, b1, q) 11: for i in [2, K] do 12: A(i)L , b (i) L , A (i) U , b (i) U → GetLinearBounds(l(:i), u(:i), A (:i) L , b (:i) L , A (:i) U , b (:i) U ) 13: l(i), u(i) → PrIneq(x, , A(i)L , b (i) L , A (i) U , b (i) U , q) 14: end for 15: return l(n) 16: end procedure
For the uniform distribution B∞(x, ), such functions γL and γU can be derived from Hoeffding’s Inequality as seen in Corollary 3.2 of (Weng et al., 2019). Inverting them, we obtain
l = ALx+ bL − √ 2 ln 1
qi ||AL||2. (6)
Performing the forward and backward bound propagation methods ultimately yields a final scalar lower bound l(n). By theorem 1, we have that
Pr x∼D [f(x) ≥ l(n)] ≥ 1− ∑ i qi = 1−Q.
We choose a simple scheme in assigning the failure probabilities. We select some subset of layers S and assign all inequalities pertaining to these layers equal qi’s. All other qi’s are set to 0. So
qi =
{ Q
2× number of neurons within layers of S if i ∈ S 0 if i 6∈ S
(7)
qi = 0 indicates using the strict bounds and is only possible when D has bounded support, as with B∞(x, ). In this case, the Hölder inequality for the l∞ norm is used. Note that the original PROVEN algorithm is equivalent to only selecting the last layer.
We found that more complex assignment strategies such as optimizing over q with sum equal to 1 did not lend themselves to any significant improvements compared to this simple method, especially given their additional run-time cost. However, there are a few factors to keep in mind when choosing the subset of layers S. In the case of uniform bounded noise, if the non-zero qi are small, it is possible for the strict inequalities, which find
lstrict = ALx+ bL − ||AL||1, ustrict = AUx+ bU + ||AU ||1 (8)
to give tighter intervals than the probabilistic inequalities.
The different weight norms also create some discrepancy in the effectiveness of the probabilistic bounds. In particular, although the matrices A in the above equations are not directly the network weights, we found that I-PROVEN performed worse on models with sparse weights. To take these into account, we return the tightest intervals using either the strict or probabilistic bounds, although we do not update the q’s to incorporate our choice.
3.2 Other distributions and certifiers
I-PROVEN can support distributions with infinite support. The only requirement which I-PROVEN has on the distribution D is that we must have lower and upper bounds on Prx∼D[ax+ b > 0] for a, b ∈ R. This may include distributions with infinite support. For example, for additive iid Gaussian noise with standard deviation , we can obtain
l = ALx+ bL − erf−1(1− 2qi) ||AL||2 (9) from basic facts about Gaussian distributions. Similar formulas can be found for Gaussian mixtures when such a distribution is known and relevant. Note that qi must be non-zero for all inequalities when dealing with distributions with infinite support, as strict alternatives no longer exist.
I-PROVEN can be used with any linear-relaxation-based certifier with no additional computational cost. As I-PROVEN only requires changing the evaluation of the scalar bounds, it has no additional time complexity cost to whatever method it is being used with. It can be incorporated in any such linear-relaxation method. For a network with K layers and n neurons at each layer, the entire method when used with CROWN is O(n3K2) (Zhang et al., 2018), an additional factor of K compared to a pass of the network.
3.3 Probabilistic verification based training
Our training method is simply a substitution of PROVEN into CROWN-IBP (Zhang et al., 2019). It requires two forward passes. The first computes strict bounds on each pre-activation neuron using IBP. The second performs linear bound propagations to compute linear bounds on the output. Then probabilistic bounds are applied to obtain the final scalar lower and upper bounds which are used in the loss function as described in CROWN-IBP’s original paper. Note that this effectively means that only the original PROVEN is applied in this training method.
4 Experiments
4.1 Implementation details
We performed experiments on the MNIST and CIFAR-10 datasets (LeCun et al., 2010; Krizhevsky, 2009). For our verification experiments, we used pretrained models provided in (Weng et al., 2018) and (Zhang et al., 2018) and the images from each dataset were scaled so that their values were in [−0.5, 0.5]. For our training experiments, we used the architectures described in (Gowal et al., 2019) and the images in the dataset were scaled to be in [0, 1]. Our MNIST models were trained for 100 epochs with a batch size of 100 and with a warm-up period for the first 5 epochs and a ramp-up period for the next 45. We followed the β and κ schedule used in (Zhang et al., 2019) for the PROVEN-IBP loss and we increase our from 0 to 4× 10−1 during the ramp-up period. We evaluate the validation set using = 3× 10−1. All code was written in Python with the use of the PyTorch library (Paszke et al., 2019) and training was conducted on a NVIDIA Tesla V100 GPU.
4.2 Verification results
We compare the original PROVEN bounds with our improved PROVEN on various MNIST and CIFAR-10 classifier in table 1. The classifier are k-layer MLPs with n neurons in each layer, denoted as k × [n]. In I-PROVEN, qi are all non-zero and equal across all layers. It achieves much better performance than the original PROVEN (Weng et al., 2019) with a 600% increase for 99.99% probabilistic robustness. Note that Q = 0 is simply the adversarial robustness certificate.
I-PROVEN’s improvement over PROVEN can be observed from intermediate layers’ bounds. In table 2, the average interval gap of the neurons in a layer is tabulated. The model in particular is a 4-layer MNIST MLP (layer 0 indicates the input, which is why the interval length is simply 2 ). PROVEN and I-PROVEN are each evaluated on 10 images for various
,Q. The scalar intervals l, u which give bounds on each neuron in both methods are averaged within each layer. The smaller the interval size (that is, the tighter the interval), the better. I-PROVEN’s intervals are noticeably tighter from the first layer onwards as it invests a portion of the total failure probability Q to tighten these earlier inequalities. This does mean PROVEN’s final inequalities are sharper than I-PROVEN’s, which we can notice by observing the ratio of interval widths between Layer 3 and Layer 4. For = 0.001, Q = 0.1 for example, PROVEN goes from 0.045 to 0.564, a 12.5× increase, while I-PROVEN goes from 0.09 to 0.203, a 22.5× increase. However, the improvements I-PROVEN makes in the earlier layers means it still ends up with tighter final bounds than PROVEN.
Note that we can our method is also able to handle additive Gaussian noise with only small changes to our probabilistic inequalities. Notably, we do not need to truncate the Gaussian distribution as (Weng et al., 2019) did. We plot our results for both additive uniform noise and additive Gaussian noise in fig. 1.
4.3 Comparison to other methods
In the earlier section, we compared I-PROVEN to PROVEN in terms of their robustness certificates. Now, we will show comparisons to IBP and a simple Monte Carlo approach based on (Anderson & Sojoudi, 2020). IBP does not generally perform well for arbitrary networks, so we use IBP-trained models in our experiment. In particular, we trained three CIFAR-10 CNN models A, B, and C. Model A was trained with standard loss, Model B was trained with an IBP loss term with ramping up to 2.2/255, and Model C was trained with an IBP loss term with ramping up to 8.8/255. We use two different , 2/255 and 8/255, for each model and certifier. Q is fixed at 1%. We apply these certifiers on 1000 images from the validation dataset.
For the Monte Carlo approach, we take T ln(1/u) samples from the uniform B∞(x, ε) distribution, where u = 0.0001 and T = 100, rounding to 921 samples per image x. Then, if every sample is correctly classified, we conclude that at least 1 − 1/T = 99% of the distribution is correctly classified, with false positive rate less than u. This Monte Carlo approach has a higher false negative rate than other sampling approaches, but we chose this one because it requires the lowest number of samples as far as we are aware.
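For concreteness, the sample-count arithmetic and the all-correct check above can be sketched as follows; the model interface and all names here are our assumptions, not code from the paper:

```python
import math
import torch

def mc_certify(model, x, y, eps, T=100, u=1e-4):
    """If every sampled point is classified correctly, then at least 1 - 1/T
    of the uniform B_inf(x, eps) distribution is correctly classified, with
    false positive rate below u. T * ln(1/u) gives ~921 samples for these defaults."""
    n = round(T * math.log(1.0 / u))
    noise = (torch.rand(n, *x.shape) * 2 - 1) * eps  # uniform in [-eps, eps]
    preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
    return bool((preds == y).all())
```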
As the results in table 3 show, I-PROVEN performs better than IBP across all models, but this gap diminishes greatly in the two IBP-trained models, particularly for Model C. Similarly, we see the gap between PROVEN and I-PROVEN close, and PROVEN even outperforms I-PROVEN in Model C. We see a similar phenomenon in the next section, section 4.4, and provide an explanation.
Unsurprisingly, the Monte Carlo approach obtains better results than any of the relaxation-based approaches. Furthermore, in terms of timing, PROVEN/I-PROVEN took 128-131s for all 1000 images per model, while the Monte Carlo approach consistently took around 4s. Evaluating the standard error and IBP error took under a second. However, the exact details on timing do depend somewhat on the situation. In this experiment, I-PROVEN could not be batched at all as the memory usage was too high, while Monte Carlo methods can easily be batched.
4.4 A case study on training with I-PROVEN
We examine I-PROVEN's verification compared with IBP's on models trained with IBP and PROVEN-IBP, respectively, on the small CNN from (Gowal et al., 2019), where the verification algorithms consider B∞(x, 0.3). Note that IBP considers the model's adversarial robustness, while I-PROVEN considers the probabilistic robustness for Q = 1 × 10⁻² (equivalently, for 99% of the ball). We found that I-PROVEN does not obtain significantly better results than IBP in either case, mirroring our results on the IBP-trained models in table 3. We conjecture that this is due to the weight sparsity induced by IBP's involvement in the training, for both pure IBP training and PROVEN-IBP, and that this sparsity is also present in the linear bounds for I-PROVEN. Weight sparsity is beneficial for adversarial robustness, particularly for the l∞ norm (Xiao et al., 2018). However, as far as we are aware, there is no reason to expect weight sparsity to help a model's probabilistic robustness, and as noted in section 3.1, I-PROVEN's probabilistic inequalities prefer more evenly distributed matrices.
This weight sparsity also explains why PROVEN outperforms I-PROVEN in Model C of table 3. When the model weights are sparse, the strict inequality is tighter than the probabilistic inequalities for very small qi and so I-PROVEN does not perform as well as usual by distributing Q evenly among the inequalities.
To test our hypothesis, we compared the last layer's linear lower bounds A_L between a CNN trained in a standard manner and a CNN trained with an IBP loss as in (Gowal et al., 2019). We show this for an image from the validation set in fig. 2. These A_L's came from I-PROVEN applied with ε = 0.3, Q = 1 × 10⁻², and the failure probabilities in the last two layers non-zero and equal. Each 28 × 28 grid corresponds to a logit in the output. The grid for 3 is all 0, as this is specifically considering the margin function between each class and the true class, 3. The absolute values of the matrix are scaled to fit in [0, 1]. Evidently, the values from the IBP model are far more sparse. Further examples are included in the appendix.
5 Conclusion
In this paper, we present I-PROVEN, an algorithm that can efficiently verify the probabilistic robustness of a neural network. We show strong improvements compared to the prior method for this problem: we remove the assumption of bounded support and significantly improve the tightness of the robustness certificate without any additional cost. Furthermore, we present a training framework for probabilistic robustness and demonstrate its shortcomings. By taking a closer look at these results, we make steps towards understanding the relation between adversarial certified defense methods and our own. | 1. What is the focus of the paper regarding local robustness, and how does it build upon prior work?
2. What are the strengths and weaknesses of the proposed approach in terms of its ability to improve over previous methods?
3. How does the reviewer assess the relevance and motivation of the problem setting considered in the paper?
4. Why does the reviewer suggest comparing the proposed method with Chernoff bound-based approaches, and what are the limitations of convex relaxation-based methods?
5. How does randomized smoothing relate to the threat model of probabilistic robustness considered in the paper, and could it be applied to this setting? | Summary Of The Paper
Review | Summary Of The Paper
This work considers the problem of local robustness where each input is perturbed according to some probability distribution (e.g. the uniform distribution over the L-infinity ball). The proposed approach is based on extending an approach from prior work which uses linear bounds to compute bounds on the output of the network. The key contribution is computing the probabilistic bounds for the inner layers, not only the final one as was done in prior work. The authors show that their approach improves over prior work on the MNIST and CIFAR-10 datasets.
Review
Overall, the paper seems to be an incremental improvement over the PROVEN algorithm proposed by Weng et al. While I like the general approach of combining several linear relaxations with the union bound, and thus improving the results, I also have some concerns, which I list below.
I am not sure that the threat model considered in this work is that relevant. I think authors should explain what would be some setting in which adversary is performing these probabilistic local perturbations. Right now, it seems to me that the setting is quite artificial, and there does not seem to be that much interest in the literature in solving this problem, except the work of Weng et al. Could authors motivate their problem setting better?
In the background, authors mention some methods based on Chernoff bound, and argue that these methods need high number of samples to achieve a high degree of accuracy. Why are these methods, e.g. the one proposed by Baluta et al., not compared to in the experimental evaluation? Without this, I think it is not actually clear whether these methods do not scale.
Could authors discuss limitations of their approach? Right now it seems that, as all of the convex relaxation based approaches, this approach does not scale to large networks, while e.g. Chernoff bound based approach is essentially independent of the architecture.
Could authors discuss relationship between the proposed method and randomized smoothing [1, 2]? Could randomized smoothing be applied to the threat model of probabilistic robustness that authors consider in this work? Fundamental limitation of approaches based on convex relaxations is that they do not scale to large networks, and randomized smoothing has worked well for achieving provable (deterministic) local adversarial robustness even on large networks and datasets, so it would be interesting to know whether it can work in this setting as well. Especially given the fact that guarantees are already probabilistic, randomized smoothing might be better tool to solve this problem than convex relaxation based approaches.
Typos:
"Note that we can our method"
[1] Lecuyer, Mathias, et al. "Certified robustness to adversarial examples with differential privacy." 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019.
[2] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." International Conference on Machine Learning. PMLR, 2019. |
ICLR | Title
Efficient Certification for Probabilistic Robustness
Abstract
Recent developments on the robustness of neural networks have primarily emphasized the notion of worst-case adversarial robustness in both verification and robust training. However, often looser constraints are needed and some margin of error is allowed. We instead consider the task of probabilistic robustness, which assumes the input follows a known probabilistic distribution and seeks to bound the probability of a given network failing against the input. We focus on developing an efficient robustness verification algorithm by extending a bound-propagation-based approach. Our proposed algorithm improves upon the robustness certificate of this algorithm by up to 8×, with no additional computational cost. In addition, we perform a case study on incorporating probabilistic robustness verification during training for the first time.
1 Introduction
Neural networks have found great success in a wide variety of applications. For many of these applications, understanding when and how a neural network can fail is crucial. Szegedy et al. (2014) found that almost visually imperceptible perturbations of input images could drastically change the output of a neural network. This realization set off a large area of research, both in finding ways of attacking neural networks and developing defense and certification methods against said attacks. The most common setting which is considered is the worst-case robustness given an lp norm. There have been a number of adversarial robustness certification methods which operate by finding provable lp balls for which the output of the neural network is guaranteed to be constant. In other words, they find lower bounds on the minimum lp radius for which an adversarial attack exists. This is done by relaxing the network for a bounded domain. Convex relaxations are typically used (Salman et al., 2019), although some works also consider quadratic program formulations (Raghunathan et al., 2018). Note that these are input-specific, so they do not give guarantees about the entire input domain.
As opposed to the worst-case adversarial robustness, there is also great interest in the threat model where the primary concern is natural noise and corruption rather than a malicious adversary and thus the perturbations follow some probability distribution. Typical neural networks have been found to be vulnerable to common corruptions (Dodge & Karam, 2016; Geirhos et al., 2018). We distinguish between these two notions of robustness as adversarial robustness and probabilistic robustness. We note that the terms such as corruption robustness, probability of violation, and adversarial density have also been used to refer to the general concept of probabilistic robustness. For the threat model of natural noise and corruption, there has not been as much work developed as for worst-case adversarial robustness. One of the very first probabilistic verification algorithms without imposing assumptions on the decision boundary is known as the PROVEN algorithm (Weng et al., 2019). In PROVEN, the authors derive probabilistic verification bounds based on existing worst-case robustness verification such as Fast-Lin and CROWN (Weng et al., 2018; Zhang et al., 2018).
Contributions. In this work, we generalize the PROVEN algorithm proposed by Weng et al. (2019) to infinite support and greatly improve the tightness of the robustness certificate without additional computation cost. We name our algorithm I-PROVEN, as the proposed
algorithm is a significantly improved version of the PROVEN algorithm. The I-PROVEN algorithm can achieve significant improvements (2×-8× tighter) in the tightness of the probabilistic robustness certificate for both MNIST and CIFAR-10 models without additional computation cost, and it also enables the certification of probability distributions with infinite support. Based on our proposed algorithm, we conduct a case study on augmenting an existing training pipeline with probabilistic robustness verification bounds, and we find mixed results for the training. We examine potential causes and implications.
2 Background and related works
Notation. In order to describe related work in depth, we will lay out some notation. We define a K-layer feed-forward ReLU neural network with n_0 inputs and n_K outputs, f : R^{n_0} → R^{n_K}, as

f(x) = f^(K)(x),
f^(i+1)(x) = W^(i+1) σ(f^(i)(x)) + b^(i+1),
f^(1)(x) = W^(1) x + b^(1).

In other words, f^(i)(x) denotes the vector of pre-activation values in the i-th layer. Generally, we work in the setting of image classifiers, where a class c is classified over a class i if f_c(x) − f_i(x) > 0. To simplify notation, we assume that the neural networks f which we are working with have already had this margin function applied to them for some given c, i. In other words, we assume n_K = 1 for convenience, and we are interested in when f(x) > 0.
2.1 Adversarial robustness verification
Adversarial robustness verification asks: given a neural network f and a region R(ε) in the input space, does there exist an x ∈ R(ε) such that f(x) ≤ 0? To solve this problem, we can formulate it as an equivalent optimization problem: min_{x∈R(ε)} f(x). If no x such that f(x) ≤ 0 exists, or equivalently, if the minimum f(x) is positive, then f is robust for the region R(ε). If we can prove that f is robust on regions R(ε) for all ε ≤ ε̄, then the robustness certificate is ε̄. The robustness certificate is a lower bound for the true minimum distortion ε*.
The regions R(ε) of general interest are Lp balls Bp(x0, ε) for a given image or input x0. This arises from the interpretation that an adversary is perturbing x0 by at most ε under a given Lp norm. Note that certification only informs robustness about a single image. As far as we know, it is infeasible to certify an entire dataset other than processing it image by image.
Convex relaxation for provable verification. These methods find a convex relaxation of a neural network in order to find provable certifications for its adversarial robustness. We will discuss these methods in detail as our method builds on them in certain ways. There are a number of works following these methods (Weng et al., 2018; Singh et al., 2018; Zhang et al., 2018; Singh et al., 2019b), and a general framework for them is described in (Salman et al., 2019). We will use the setup used in CROWN (Zhang et al., 2018). In these methods, inequalities on pre-activation neurons are recursively computed
l^(j) ≤ A_L^(j) x + b_L^(j) ≤ f^(j)(x) ≤ A_U^(j) x + b_U^(j) ≤ u^(j)

for each layer j. Note that these are element-wise bounds, l^(j) and u^(j) are scalar vectors, and A_L^(j), b_L^(j), A_U^(j), b_U^(j) are linear transformations of the inputs x. These linear bounds are obtained by relaxing the non-linear activation functions to linear lower and upper bounds, given that the inputs to the activation functions are within some interval found from the inequalities applied to earlier layers, l^(i), u^(i), i < j. These inequalities are propagated backwards through the network until the original input is reached. Under the typical lp ball threat model, Hölder's inequality can give scalar bounds on these layers, and the process can continue to the final outputs. (Singh et al., 2019a; Tjandraatmadja et al., 2020) have made progress on improving these bounds beyond the convex relaxation gap pointed out by Salman et al. (2019) by considering the activation functions on multiple neurons jointly.
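As an illustration of the relaxation step just described, here is a minimal sketch of a CROWN-style ReLU relaxation; the adaptive choice of lower slope is one common heuristic, not necessarily the exact rule used in the cited works:

```python
def relu_relaxation(l, u):
    """Linear bounds a_lo*z + b_lo <= relu(z) <= a_up*z + b_up, valid for all
    z in [l, u]; these are what the bound propagation composes across layers."""
    if l >= 0:                       # neuron always active: relu(z) = z
        return 1.0, 0.0, 1.0, 0.0
    if u <= 0:                       # neuron always inactive: relu(z) = 0
        return 0.0, 0.0, 0.0, 0.0
    a_up = u / (u - l)               # chord through (l, 0) and (u, u)
    a_lo = 1.0 if u >= -l else 0.0   # adaptive lower slope in [0, 1]
    return a_lo, 0.0, a_up, -a_up * l
```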
2.2 Probabilistic robustness
For probabilistic robustness, we are considering a known probability distribution D : R^n → [0, 1] from which the inputs x are sampled. We will focus on additive iid uniform noise, which we denote, in an abuse of notation, as B∞(x, ε). In other words, B∞(x, ε) is the distribution generated by sampling points uniformly from the hyperrectangle [x_1 − ε, x_1 + ε] × [x_2 − ε, x_2 + ε] × · · · × [x_n − ε, x_n + ε]. Then the problem of probabilistic robustness verification is to verify that
Pr_{x∼D}[f(x) > 0] ≥ 1 − Q   (1)
for some given failure probability Q.
We can define the robustness certificate similarly to how it was done for adversarial robustness. Weng et al. (2019) and Anderson & Sojoudi (2020) particularly consider the maximum ε parameterizing D for which the above holds, and we also provide such results. This is found by binary searching over ε, as empirically we find that the robustness is monotonic in ε for the distributions we consider, though we note that there are no theoretical guarantees for this.
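The binary search over ε can be sketched as below, where certify(eps) stands for any verifier of eq. (1) at a fixed Q; as noted, the monotonicity it relies on is only an empirical observation:

```python
def max_certified_eps(certify, eps_hi=1.0, iters=20):
    """Largest eps whose probabilistic robustness certifies, assuming
    certify(eps) is monotone: True for small eps, False for large eps."""
    lo, hi = 0.0, eps_hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        if certify(mid):
            lo = mid   # certified: try a larger radius
        else:
            hi = mid   # not certified: shrink the radius
    return lo
```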
Sampling methods. Sampling gives well-established statistical guarantees which can be applied to the problem of probabilistic robustness. By using the neural network essentially as a black-box, Chernoff bounds can estimate the probability of the neural network giving the incorrect classification given a distribution. This has the advantage of making no assumptions on the model or the distribution, but requires a large number of samples to achieve a high degree of accuracy and there is an inherent uncertainty present in the application of such an algorithm. Baluta et al. (2021) notes for example, that proving that the probability is between [0.1− 0.5× 10−4, 0.1 + 0.5× 10−4] with a confidence of 0.9 would require 5.5× 106 samples. To overcome this, they propose a framework which reduces the number of samples necessary, although this is dependent on the true probability. Anderson & Sojoudi (2020) also provide a method that can find upper bounds on the probability that a model is incorrect with a small number of samples. Webb et al. (2018) uses a clever sampling method that leverages the layered structure of common architectures. They require upwards of 107 samples but are able to obtain precise estimations. Though they are unable to provide theoretical guarantees, they show that empirically, their estimations agree with naive Monte Carlo estimates with as many as 1010 samples.
2.3 Training adversarially-robust models
Training methods for improving adversarial robustness have generally taken two paths. The first augments the data with adversarial attacks in order to strengthen a model’s resistance to such attacks (Madry et al., 2017). The second approach adds loss regularization terms that help the model learn robust features of the data. Xiao et al. (2018) identifies weight sparsity and ReLU stability as important factors in a model’s adversarial robustness and builds a training framework which incorporates these. Other works use certification methods as regularization terms in order to improve the certifiable robustness of a model. Interval bound propagation (IBP) has found great success in this despite being a relatively loose certification method (Gowal et al., 2019). In particular, the efficiency of IBP has made it amenable to training. A number of other works have made progress in closing this efficiency gap (Zhang et al., 2019; Xu et al., 2020; Shi et al., 2021; Boopathy et al., 2021).
3 Our main results
In section 3.1, we illustrate the idea of deriving a tighter probabilistic robustness certificate and provide the details of our I-PROVEN algorithm. We remark on alternative setups in section 3.2. In section 3.3, we conduct a case study on including our proposed probabilistic verification bounds in standard training. All experimental results are reported in section 4.
3.1 I-PROVEN: Improving the tightness of PROVEN algorithm
In this subsection, we will show how we build on top of the state-of-the-art PROVEN algorithm (Weng et al., 2019) to derive a tighter probabilistic robustness certificate in I-PROVEN. Using convex relaxation methods, one can find linear bounds with respect to the input such that

A_L^(K) x + b_L^(K) ≤ f(x) ≤ A_U^(K) x + b_U^(K),  ∀x ∈ supp(D).   (2)
PROVEN argues that by applying probabilistic bounds
Pr_{x∼D}[A_L^(K) x + b_L^(K) ≥ 0] ≥ 1 − q,   (3)
we can conclude that f(x) ≥ 0 with probability at least 1 − q. Notably, probabilistic inequalities are only considered in the final layer. Thus, they are able to give a probabilistic robustness certification.
To extend this method, we must look into how these linear bounds A_L, b_L, A_U, b_U were obtained. This is done recursively: every layer's pre-activation neurons are bounded by linear bounds with respect to the input. Then, scalar bounds are obtained using Hölder's inequality on the linear bounds and the support of the input. These scalar bounds define intervals in the domain of each activation function for the next layer. This allows linear bounds to be calculated over the activation function and for the next layer to continue propagating linear bounds.
The key observation is that these linear bounds in previous layers can also apply with some probability rather than strictly and that the failure probabilities accumulate linearly. Assume that we have some q′ for which
Pr_{x∼D}[f(x) ≥ A_L^(K) x + b_L^(K)] ≥ 1 − q′.   (4)
Then a simple union bound gives that the probability of f(x) ≥ 0 is at least 1 − q − q′. This is a simple scenario which can be extended to involve all layers of the network.

Theorem 1. In the convex-relaxation framework, for each scalar inequality i (l ≤ A_L x + b_L or u ≥ A_U x + b_U), denote q_i as the probability that this inequality is violated with respect to the probability distribution D from which x is sampled. Then the probability of the final output of the convex-relaxation algorithm holding for a given x ∼ D is at least 1 − Σ_i q_i.
Proof. The convex-relaxation framework operates by making a series of m inequalities which ultimately lead to the output layer. We can label these L_1, L_2, . . . , L_m. When we apply probabilistic bounds, these inequalities may not be guaranteed. We will denote E_j to be the event that L_j is correct, and E_j^c its complement. Even though there is the chance of failure, we will operate assuming that all inequalities were correct. If they are indeed all correct for some x, then we can conclude that f(x) ≥ l^(K) for this particular x, as expected. Then it suffices to find a lower bound on the probability of the intersection of the E_j's. We have

Pr_{x∼D}[f(x) ≥ l^(K)] ≥ Pr_{x∼D}[⋂_j E_j] = 1 − Pr_{x∼D}[⋃_j E_j^c] ≥ 1 − Σ_j Pr_{x∼D}[E_j^c] = 1 − Σ_j q_j   (5)
as desired.
Now we will take advantage of this theorem. Assign every scalar inequality i (either A_L x + b_L ≥ l or A_U x + b_U ≤ u) a failure probability q_i, with the q_i summing to Q altogether. We can invert the probabilistic inequalities to find scalars l and u such that these hold with the given failure probability q_i. In particular, say that the functions γ_L(a, b) and γ_U(a, b) are such that

γ_L(a, b) ≤ Pr_{x∼D}[a x + b > 0] ≤ γ_U(a, b)

for any a, b. We want to choose l such that Pr[A_L x + b_L < l] ≤ q_i. Thus, it suffices to solve for l such that γ_L(A_L, b_L − l) = 1 − q_i.
Algorithm 1: I-PROVEN for B∞(x, ε)
 1: procedure PrIneq(x, ε, A_L, b_L, A_U, b_U, q)
 2:   l_strict ← A_L x + b_L − ε ‖A_L‖_1
 3:   u_strict ← A_U x + b_U + ε ‖A_U‖_1
 4:   l_prob ← A_L x + b_L − ε √(2 ln(1/q)) ‖A_L‖_2
 5:   u_prob ← A_U x + b_U + ε √(2 ln(1/q)) ‖A_U‖_2
 6:   return max(l_strict, l_prob), min(u_strict, u_prob)
 7: end procedure
 8: procedure IPROVEN(x, ε, f, Q)
 9:   q ← Q / (2 × number of neurons)
10:   l^(1), u^(1) ← PrIneq(x, ε, W_1, b_1, W_1, b_1, q)
11:   for i in [2, K] do
12:     A_L^(i), b_L^(i), A_U^(i), b_U^(i) ← GetLinearBounds(l^(:i), u^(:i), A_L^(:i), b_L^(:i), A_U^(:i), b_U^(:i))
13:     l^(i), u^(i) ← PrIneq(x, ε, A_L^(i), b_L^(i), A_U^(i), b_U^(i), q)
14:   end for
15:   return l^(n)
16: end procedure
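A minimal NumPy sketch of the PrIneq subroutine (lines 1-7) over B∞(x, ε); the names are ours, and q = 0 falls back to the strict Hölder bounds:

```python
import numpy as np

def pr_ineq(x, eps, A_L, b_L, A_U, b_U, q):
    """Scalar bounds on a layer over B_inf(x, eps); each returned inequality
    holds with failure probability at most q (strictly, when q == 0)."""
    l_strict = A_L @ x + b_L - eps * np.abs(A_L).sum(axis=1)  # Hoelder, l_inf ball
    u_strict = A_U @ x + b_U + eps * np.abs(A_U).sum(axis=1)
    if q <= 0:
        return l_strict, u_strict
    c = eps * np.sqrt(2.0 * np.log(1.0 / q))                  # Hoeffding, eq. (6)
    l_prob = A_L @ x + b_L - c * np.linalg.norm(A_L, axis=1)
    u_prob = A_U @ x + b_U + c * np.linalg.norm(A_U, axis=1)
    return np.maximum(l_strict, l_prob), np.minimum(u_strict, u_prob)
```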
For the uniform distribution B∞(x, ε), such functions γ_L and γ_U can be derived from Hoeffding's Inequality, as seen in Corollary 3.2 of (Weng et al., 2019). Inverting them, we obtain

l = A_L x + b_L − ε √(2 ln(1/q_i)) ‖A_L‖_2.   (6)
Performing the forward and backward bound propagation methods ultimately yields a final scalar lower bound l^(n). By Theorem 1, we have that

Pr_{x∼D}[f(x) ≥ l^(n)] ≥ 1 − Σ_i q_i = 1 − Q.
We choose a simple scheme for assigning the failure probabilities. We select some subset of layers S and assign all inequalities pertaining to these layers equal q_i's. All other q_i's are set to 0. So

q_i = Q / (2 × number of neurons within the layers of S)  if i ∈ S,  and  q_i = 0  if i ∉ S.   (7)
q_i = 0 indicates using the strict bounds and is only possible when D has bounded support, as with B∞(x, ε). In this case, the Hölder inequality for the l∞ norm is used. Note that the original PROVEN algorithm is equivalent to selecting only the last layer.
We found that more complex assignment strategies, such as optimizing over q with sum equal to 1, did not lend themselves to any significant improvements over this simple method, especially given their additional run-time cost. However, there are a few factors to keep in mind when choosing the subset of layers S. In the case of uniform bounded noise, if the non-zero q_i are small, it is possible for the strict inequalities, which find

l_strict = A_L x + b_L − ε ‖A_L‖_1,   u_strict = A_U x + b_U + ε ‖A_U‖_1   (8)

to give tighter intervals than the probabilistic inequalities.
The different weight norms also create some discrepancy in the effectiveness of the probabilistic bounds. In particular, although the matrices A in the above equations are not directly the network weights, we found that I-PROVEN performed worse on models with sparse weights. To take these into account, we return the tightest intervals using either the strict or probabilistic bounds, although we do not update the q’s to incorporate our choice.
3.2 Other distributions and certifiers
I-PROVEN can support distributions with infinite support. The only requirement which I-PROVEN places on the distribution D is that we must have lower and upper bounds on Pr_{x∼D}[ax + b > 0] for a, b ∈ R. This may include distributions with infinite support. For example, for additive iid Gaussian noise with standard deviation σ, we can obtain

l = A_L x + b_L − √2 σ erf⁻¹(1 − 2q_i) ‖A_L‖_2   (9)

from basic facts about Gaussian distributions. Similar formulas can be found for Gaussian mixtures when such a distribution is known and relevant. Note that q_i must be non-zero for all inequalities when dealing with distributions with infinite support, as strict alternatives no longer exist.
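Under this reading of eq. (9) (the √2 and σ factors are our reconstruction of symbols lost in the source), the Gaussian case is a one-line change to the probabilistic bound:

```python
import numpy as np
from scipy.special import erfinv

def gaussian_lower_bound(x, sigma, A_L, b_L, q):
    """Lower bound on A_L x' + b_L holding with probability >= 1 - q when
    x' = x + delta, delta ~ N(0, sigma^2 I): each row of A_L x' is Gaussian
    with mean (A_L x)_i and standard deviation sigma * ||(A_L)_i||_2."""
    c = np.sqrt(2.0) * sigma * erfinv(1.0 - 2.0 * q)
    return A_L @ x + b_L - c * np.linalg.norm(A_L, axis=1)
```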
I-PROVEN can be used with any linear-relaxation-based certifier with no additional computational cost. As I-PROVEN only requires changing the evaluation of the scalar bounds, it adds no time complexity to whatever method it is being used with, and it can be incorporated into any such linear-relaxation method. For a network with K layers and n neurons at each layer, the entire method when used with CROWN is O(n³K²) (Zhang et al., 2018), an additional factor of K compared to a pass of the network.
3.3 Probabilistic verification based training
Our training method is simply a substitution of PROVEN into CROWN-IBP (Zhang et al., 2019). It requires two forward passes. The first computes strict bounds on each pre-activation neuron using IBP. The second performs linear bound propagation to compute linear bounds on the output. Then probabilistic bounds are applied to obtain the final scalar lower and upper bounds, which are used in the loss function as described in CROWN-IBP's original paper. Note that this effectively means only the original PROVEN is applied in this training method.
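Schematically, and omitting CROWN-IBP's β mixing of IBP and linear bounds, the resulting objective has roughly the following shape; prob_lower_margin stands for the probabilistic lower bound on the class margins, and all names here are our assumptions:

```python
import torch.nn.functional as F

def proven_ibp_loss(logits, y, prob_lower_margin, kappa):
    """CROWN-IBP-style mixture of the natural loss and a robust loss built
    from lower bounds on the margins f_y(x) - f_j(x); as in CROWN-IBP, the
    negated margin bounds are fed to cross-entropy as pseudo-logits."""
    natural = F.cross_entropy(logits, y)
    robust = F.cross_entropy(-prob_lower_margin, y)
    return kappa * natural + (1.0 - kappa) * robust
```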
4 Experiments
4.1 Implementation details
We performed experiments on the MNIST and CIFAR-10 datasets (LeCun et al., 2010; Krizhevsky, 2009). For our verification experiments, we used pretrained models provided in (Weng et al., 2018) and (Zhang et al., 2018), and the images from each dataset were scaled so that their values were in [−0.5, 0.5]. For our training experiments, we used the architectures described in (Gowal et al., 2019), and the images in the dataset were scaled to be in [0, 1]. Our MNIST models were trained for 100 epochs with a batch size of 100, with a warm-up period for the first 5 epochs and a ramp-up period for the next 45. We followed the β and κ schedule used in (Zhang et al., 2019) for the PROVEN-IBP loss, and we increase our ε from 0 to 4 × 10⁻¹ during the ramp-up period. We evaluate the validation set using ε = 3 × 10⁻¹. All code was written in Python with the use of the PyTorch library (Paszke et al., 2019), and training was conducted on an NVIDIA Tesla V100 GPU.
4.2 Verification results
We compare the original PROVEN bounds with our improved PROVEN on various MNIST and CIFAR-10 classifiers in table 1. The classifiers are k-layer MLPs with n neurons in each layer, denoted as k × [n]. In I-PROVEN, the qi are all non-zero and equal across all layers. It achieves much better performance than the original PROVEN (Weng et al., 2019), with a 600% increase for 99.99% probabilistic robustness. Note that Q = 0 is simply the adversarial robustness certificate.
I-PROVEN's improvement over PROVEN can be observed from the intermediate layers' bounds. In table 2, the average interval gap of the neurons in each layer is tabulated. The model in particular is a 4-layer MNIST MLP (layer 0 indicates the input, which is why the interval length is simply 2ε). PROVEN and I-PROVEN are each evaluated on 10 images for various ε, Q. The scalar intervals l, u which give bounds on each neuron in both methods are averaged within each layer. The smaller the interval size (that is, the tighter the interval), the better. I-PROVEN's intervals are noticeably tighter from the first layer onwards, as it invests a portion of the total failure probability Q to tighten these earlier inequalities. This does mean PROVEN's final inequalities are sharper than I-PROVEN's, which we can notice by observing the ratio of interval widths between Layer 3 and Layer 4. For ε = 0.001, Q = 0.1, for example, PROVEN goes from 0.045 to 0.564, a 12.5× increase, while I-PROVEN goes from 0.09 to 0.203, a 22.5× increase. However, the improvements I-PROVEN makes in the earlier layers mean it still ends up with tighter final bounds than PROVEN.
Note that our method is also able to handle additive Gaussian noise with only small changes to our probabilistic inequalities. Notably, we do not need to truncate the Gaussian distribution as (Weng et al., 2019) did. We plot our results for both additive uniform noise and additive Gaussian noise in fig. 1.
4.3 Comparison to other methods
In the earlier section, we compared I-PROVEN to PROVEN in terms of their robustness certificates. Now, we will show comparisons to IBP and a simple Monte Carlo approach based on (Anderson & Sojoudi, 2020). IBP does not generally perform well for arbitrary networks, so we use IBP-trained models in our experiment. In particular, we trained three CIFAR-10 CNN models A, B, and C. Model A was trained with the standard loss, Model B was trained with an IBP loss term with ε ramping up to 2.2/255, and Model C was trained with an IBP loss term with ε ramping up to 8.8/255. We use two different ε, 2/255 and 8/255, for each model and certifier. Q is fixed at 1%. We apply these certifiers on 1000 images from the validation dataset.
For the Monte Carlo approach, we take T ln(1/u) samples from the uniform B∞(x, ε) distribution, where u = 0.0001 and T = 100, rounding to 921 samples per image x. Then, if every sample is correctly classified, we conclude that at least 1 − 1/T = 99% of the distribution is correctly classified, with false positive rate less than u. This Monte Carlo approach has a higher false negative rate than other sampling approaches, but we chose this one because it requires the lowest number of samples as far as we are aware.
As the results in table 3 show, I-PROVEN performs better than IBP across all models, but this gap diminishes greatly in the two IBP-trained models, particularly for Model C. Similarly, we see the gap between PROVEN and I-PROVEN close, and PROVEN even outperforms I-PROVEN in Model C. We see a similar phenomenon in the next section, section 4.4, and provide an explanation.
Unsurprisingly, the Monte Carlo approach obtains better results than any of the relaxation-based approaches. Furthermore, in terms of timing, PROVEN/I-PROVEN took 128-131s for all 1000 images per model, while the Monte Carlo approach consistently took around 4s. Evaluating the standard error and IBP error took under a second. However, the exact details on timing do depend somewhat on the situation. In this experiment, I-PROVEN could not be batched at all as the memory usage was too high, while Monte Carlo methods can easily be batched.
4.4 A case study on training with I-PROVEN
We examine I-PROVEN's verification compared with IBP's on models trained with IBP and PROVEN-IBP, respectively, on the small CNN from (Gowal et al., 2019), where the verification algorithms consider B∞(x, 0.3). Note that IBP considers the model's adversarial robustness, while I-PROVEN considers the probabilistic robustness for Q = 1 × 10⁻² (equivalently, for 99% of the ball). We found that I-PROVEN does not obtain significantly better results than IBP in either case, mirroring our results on the IBP-trained models in table 3. We conjecture that this is due to the weight sparsity induced by IBP's involvement in the training, for both pure IBP training and PROVEN-IBP, and that this sparsity is also present in the linear bounds for I-PROVEN. Weight sparsity is beneficial for adversarial robustness, particularly for the l∞ norm (Xiao et al., 2018). However, as far as we are aware, there is no reason to expect weight sparsity to help a model's probabilistic robustness, and as noted in section 3.1, I-PROVEN's probabilistic inequalities prefer more evenly distributed matrices.
This weight sparsity also explains why PROVEN outperforms I-PROVEN in Model C of table 3. When the model weights are sparse, the strict inequality is tighter than the probabilistic inequalities for very small qi and so I-PROVEN does not perform as well as usual by distributing Q evenly among the inequalities.
To test our hypothesis, we compared the last layer's linear lower bounds A_L between a CNN trained in a standard manner and a CNN trained with an IBP loss as in (Gowal et al., 2019). We show this for an image from the validation set in fig. 2. These A_L's came from I-PROVEN applied with ε = 0.3, Q = 1 × 10⁻², and the failure probabilities in the last two layers non-zero and equal. Each 28 × 28 grid corresponds to a logit in the output. The grid for 3 is all 0, as this is specifically considering the margin function between each class and the true class, 3. The absolute values of the matrix are scaled to fit in [0, 1]. Evidently, the values from the IBP model are far more sparse. Further examples are included in the appendix.
5 Conclusion
In this paper, we present I-PROVEN, an algorithm that can efficiently verify the probabilistic robustness of a neural network. We show strong improvements compared to the prior method for this problem: we remove the assumption of bounded support and significantly improve the tightness of the robustness certificate without any additional cost. Furthermore, we present a training framework for probabilistic robustness and demonstrate its shortcomings. By taking a closer look at these results, we make steps towards understanding the relation between adversarial certified defense methods and our own. | 1. What is the focus and significance of the main result (Theorem 1) presented in the paper?
2. Is the contribution of the paper sufficient to meet the standards of ICLR?
3. Are there any concerns or suggestions regarding the technical content and its presentation in the paper? | Summary Of The Paper
Review | Summary Of The Paper
see below
Review
The main result, Theorem 1, seems rather trivial. While it is good to highlight that one can consider a union over all layers, the contribution may not be significant enough for ICLR.
updates after author response
I still feel that, while the observation of the union bound is cute, the overall technical contribution is below the bar (the union bound is the only new analysis). However, I do encourage the authors to further explore the power of the bound, among other things, to get a stronger algorithm.
ICLR | Title
A stepped sampling method for video detection using LSTM
Abstract
Artificial neural networks are considered to simulate human neural networks, and have achieved great progress on object detection, natural language processing (NLP), image generation, etc. Hermann Ebbinghaus proposed the law of human memory and described how to improve human learning in 1885. Inspired by Ebbinghaus' work, we propose a stepped sampler based on "repeated input", Ebbinghaus' prescription for strengthening learning. We repeatedly input data to the LSTM model stepwise within a batch. The stepped sampler is used to strengthen the ability of LSTM to fuse temporal information. We tested the stepped sampler on the LSTM offered by PyTorch. Compared with the traditional samplers of PyTorch, such as the sequential sampler and the batch sampler, the training loss of the proposed stepped sampler converges faster during model training, and the training loss after convergence is more stable, meaning that there is no large jitter after convergence. Meanwhile, it can maintain a higher test accuracy, compared with the traditional samplers. We quantified the algorithm of the stepped sampler. We assume that artificial neural networks may have human-like characteristics, and that human learning methods can be used for machine learning. Our code will be available online soon.
N/A
1 INTRODUCTION
The emergence of convolutional neural networks (CNN) (LeCun et al., 1989) has improved the self-learning ability of artificial neural networks. The Recurrent Neural Network (RNN) (Mikolov et al., 2010) is used to process temporal data. An RNN takes the output of the previous time step as the input of the next time step, effectively using the temporal information of the input sequence.
RNNs may suffer from vanishing or exploding gradients. Hochreiter et al. (Hochreiter & Schmidhuber, 1997) proposed LSTM. LSTM adds gates to the RNN, so it can effectively avoid the problem of vanishing or exploding gradients. These gates include the forget gates, the input gates, and the output gates. The forget gate seems to be the most important among them. LSTM may simulate the memory process of the human brain: the human brain selectively forgets some information in order to learn better.
Considering that the principles of artificial neural networks may be learned from biological neural networks, for artificial neural networks with memory effects, such as LSTM, we borrow the human memory method of repeated input and timely review, and study the effect of this repeated-input method on LSTM detection results, without changing the LSTM network structure.
In this study, we learn the effect of the proposed input method on neural networks with memory characteristics, such as LSTM. Specifically, we repeatedly input training data, simulating the "repeated input and timely review" method of human memory proposed by Hermann Ebbinghaus (Ebbinghaus, 1913) in 1885, described as "Increasing Memory Strength and Rate of Learning" in his work.
1.1 OUR CONTRIBUTION
Our views in this paper mainly include the following 3 aspects:
a) A novel sampler is proposed, which implements sampling in a circular and stepwise manner. Compared with the traditional sampler, the loss curve of the LSTM model using this stepped sampler converges faster in training and is more stable after convergence, namely there is no large jitter after convergence. Moreover, its test accuracy curve is also more stable, with no jitter. When the batch size is 15, the test accuracy of the stepped sampler LSTM is much higher than that of the traditional sampler with the same parameters.
b) The idea of this sampler comes from the laws of human memory proposed by Ebbinghaus (Ebbinghaus, 1913). We boldly assume that other human learning methods can also be applied to machine learning; one example is the proposal of the attention mechanism (Vaswani et al., 2017). Moreover, from the experimental performance, we believe that artificial neural networks have human-like characteristics.
c) We try to use mathematical language to describe the temporal information of video frames. We apply the mathematical equations to our experimental results, and analyze that the test accuracy in the experiment reflects the temporal information between video frames. The derivation is shown in Appendix A and Appendix B.
2 RELATED WORK
Gibbs sampling is one of the earliest data sampling algorithms, proposed by Geman et al. (Geman & Geman, 1984) in 1984. Gibbs sampling makes the probability of the data samples approximately equal to the required probability distribution via iterations. It randomly selects data from an initial input sequence and iterates according to specified conditional probabilities, which are related to the required probability distribution of the final sampled data. After iterations, Gibbs sampling generates data consistent with the required probability distribution. Hu et al. (Hu et al., 2018) used neural networks to generate a sampler that transfers the initial data distribution to the target distribution. The method can generate the sampling data at the same time as training, and works with un-normalized probability density functions. Wang et al. (Wang et al., 2018) used Generative Adversarial Nets (GAN) (Goodfellow et al., 2014) to generate negative samples. The approach is the first to combine GAN with negative sampling, which improves the training effect of the streaming recommender system. Chu et al. (Chu et al., 2019) proposed a novel sampler that can sample both positive and negative data from the input data sequences, so as to let the classifier utilize both the Regions of Interest and the background of the data. The sampler is used in a few-shot image classifier that uses reinforcement learning. The reinforcement learning algorithm (Kaelbling et al., 1996) needs to continuously select Regions of Interest from the images and subsequently recognize their content. Sampling these Regions of Interest can improve the efficiency of reinforcement learning, owing to the reduction of input samples. Muhammad et al. (Muhammad et al., 2021) proposed a bi-directional long short-term memory (BiLSTM) with an attention mechanism and a dilated convolutional neural network (DCNN) to perform action recognition, which outperformed the state-of-the-art methods. Kwon et al. (Kwon et al., 2021) proposed a spatio-temporal neighbourhood learning method for action recognition, which achieved state-of-the-art performance.
3 MATERIALS AND METHODS
This paper studies the impact of the memory effect on temporal-sequence neural networks (such as LSTM) from the perspective of data input, rather than the network structure. The process simulates the method of enhancing the memory of the human brain by repeating the input data in a stepped way, which Hermann Ebbinghaus (Ebbinghaus, 1913) called "Increasing rate of learning" in his book. The specific mode we used is the wheel tactic (Smith, 1994) used when reciting words, implemented by establishing a novel data sampler in LSTM model training. The dataset in the experiment is UCF101 (Soomro et al., 2012), a human action recognition video dataset. The name of each folder indicates the annotation of the video.
3.1 EBBINGHAUS FORGETTING CURVE
The Ebbinghaus forgetting curve (Ebbinghaus, 1913) describes the memory retention of the human brain over time; it was proposed by Hermann Ebbinghaus in 1885. This theory reveals the law of human memory, which is also the law of human learning: when learning new knowledge, human memory is lost quickly at first and slowly later. Ebbinghaus also pointed out that timely review and repeated input are the key to preventing forgetting, consolidating knowledge, and learning better. Figure 1 illustrates the Ebbinghaus forgetting curve; timely review can reduce forgetting, which improves learning. Based on the Ebbinghaus forgetting curve of the human brain, we simulated Ebbinghaus' method in machine learning. We believe that the experimental results in Section 4 could prove that there is a certain correlation between human learning and machine learning, since the machine learning method with timely review and spaced repetition has a faster learning effect, compared with machine learning without the human-like method.
Ebbinghaus also found that making good use of the correlations between pieces of knowledge is another key to enhancing learning. We define these correlations as temporal information in Appendix A. Thereby, enhancing the use of temporal information is the key to video detection, natural language processing (NLP), etc. We believe that the partly repeated input of the stepped sampler enhances the correlation and hence the temporal information.
3.2 LSTM
The LSTM architecture we used in this paper starts with a CNN backbone. The CNN backbone has four convolutional layers. The numbers of convolution kernels in the convolutional layers are 32, 64, 128, and 256, respectively. The sizes of the convolution kernels are 5 × 5, 3 × 3, 3 × 3, and 3 × 3, the stride of each convolution kernel is 2, and the padding is 0. Each convolutional layer is followed by a batch normalization (BN) (Ioffe & Szegedy, 2015) layer and a ReLU layer. The last part of the CNN model is 3 fully connected (FC) layers, which use the dropout function. The dimensions of the 3 FC layers are 1024, 768, and 512, respectively. The LSTM used in this paper is the model provided by PyTorch. The input dimension of the LSTM is 512, the hidden layer dimension is 512, and the number of hidden layers is 3. Next are two fully connected (FC) layers followed by the dropout function. The dimension of the FC layers is 256. The dropout rates of the CNN backbone and the LSTM are both 0.3.
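A PyTorch sketch of the described architecture follows; the global average pooling before the FC stack and the final classification layer are our assumptions where the text is silent:

```python
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes=101, hidden=512):
        super().__init__()
        chans, ksizes = [3, 32, 64, 128, 256], [5, 3, 3, 3]
        layers = []
        for cin, cout, k in zip(chans[:-1], chans[1:], ksizes):
            layers += [nn.Conv2d(cin, cout, k, stride=2, padding=0),
                       nn.BatchNorm2d(cout), nn.ReLU()]
        self.cnn = nn.Sequential(*layers)
        self.fc = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Dropout(0.3),
                                nn.Linear(1024, 768), nn.ReLU(), nn.Dropout(0.3),
                                nn.Linear(768, 512))
        self.lstm = nn.LSTM(512, hidden, num_layers=3, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.3),
                                  nn.Linear(256, num_classes))

    def forward(self, clips):                 # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        h = self.cnn(clips.flatten(0, 1))     # (batch*time, 256, h', w')
        h = self.fc(h.mean(dim=(2, 3)))       # global average pool, then FC stack
        out, _ = self.lstm(h.view(b, t, -1))  # fuse temporal information
        return self.head(out[:, -1])          # classify from the last time step
```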
3.3 THE STEPPED SAMPLER
Our experiment is compared against common samplers. Common samplers in PyTorch (Paszke et al., 2019) include the random sampler, the weighted random sampler, the batch sampler, etc. Batch sampling is probably the most commonly used. Previous research added memory units to deep learning networks, such as RNN and LSTM. Analogous to human learning, an important point is repetition, and the sampler is an appropriate place to simulate this "repetition", since the data in each batch can be designed to be input repeatedly. We suppose that "repetition" is important, not only for human beings, but also for computers. To let computers make better use of "repetition", analogizing the way we recite words, we propose a "stepped" repetition input method, which is the stepped sampler.
The structure of the proposed stepped sampler is illustrated in Figure 2. It is built on the batch sampler. The stepped sampler divides a batch into several sub-batches. Like human memory, this sampler adopts the principle of adjacent repetition (Crowder, 1968); namely, the back of the previous sub-batch is the same as the front of the next sub-batch.
The structure of the stepped sampler shows that the input data of different sub-batches is partly duplicated. The repeated input seems to increase redundancy, but the experimental results show that, in our experimental environment, this method can accelerate the convergence of the LSTM model. There is a stride between the previous sub-batch and the next sub-batch. The stride size n can be set manually. We believe that this partial repetition enhances the correlation of the input frames, thereby enhancing the temporal information of the input frames, according to our definition of temporal information in Appendix A. Section 4 describes the comparative experiments on the sampler with different stride sizes.
3.4 THE ALGORITHM OF THE STEPPED SAMPLER
The stepped sampler is designed on the basis of the batch sampler. The algorithm implements stepped sampling within each batch via the batch sampler. The workflow is as follows: the data first goes through the sequential sampler of PyTorch; then, it is grouped into batches by the batch sampler; finally, the data in each batch are divided into sub-batches with the same stride by the stepped sampler.
As shown in Figure 2, assuming that the number of stepped-sampler iterations in a batch is d, it can be concluded from the figure:

L = m + n × d   (1)

It can be deduced that the stepped-sampler iteration number per batch, d, is:

d = (L − m) / n   (2)
Equation 2 is used as the iteration count within a batch in Algorithm 1. d is computed by the algorithm once L, m, and n are determined. If d is not an integer, PyTorch rounds it down to ensure that d is an integer. The number of batches is calculated by the framework, and the number of epochs is set manually. The algorithm of the proposed sampler is shown in Algorithm 1. The idea is to implement the stepped sampler within each batch, after the sequential sampler and batch sampler of PyTorch. Line 12 of Algorithm 1 means that, after each stepped sub-batch is output, the starting coordinate is moved forward by n (the step stride) from the starting position of the previous sub-batch.
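For example, with batch size L = 25, step size m = 20, and stride n = 2 (the setting of Figure 4 (c)), eq. (2) gives d = (25 − 20)/2 = 2.5, rounded down to 2; a one-line sketch:

```python
def steps_per_batch(L, m, n):
    """Stepped-sampler iteration count per batch, eq. (2), rounded down."""
    return (L - m) // n

print(steps_per_batch(25, 20, 2))  # -> 2
```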
4 RESULTS
4.1 EXPERIMENT SETUP
The system used in the experiment was a workstation with 32 GB of CPU RAM and an NVIDIA GeForce GTX 1080 Ti GPU. The processor was an Intel i7 8700, and the operating system was Ubuntu 16.04, 64-bit. The PyTorch version used in the experiment was 1.0.1, the Python version was 3.6, the NumPy version was 1.20.4, and the Sklearn version was 0.20.4; Matplotlib, Pandas, and tqdm were also part of the software environment. The reason we chose an old version of PyTorch is that, although the performance of the old version may not be as powerful, the experimental effect may be clearer, since the contrast of the results may be larger. In addition, since the old version does not have too many functions, it lets us focus on the factor of "repeated input", without interference from other irrelevant factors.
We searched the relevant literature and found that there may be no LSTM literature so far that applies human learning methods to machine learning. Therefore, the experiment we designed is a comparison experiment: an ordinary CNN-LSTM, with or without the stepped sampler, with all other parameters the same. One of the advantages of this is that it can reduce the influence of other irrelevant factors and concentrate specifically on the machine learning results of human learning methods. The human learning method here is the timely review proposed by Ebbinghaus.
4.2 TRAINING
The detection accuracy evaluation and cross-entropy loss were used for training the models. The accuracy evaluation in the experiment used the accuracy score tool in the Sklearn package of Python. The cross-entropy loss used the corresponding PyTorch function. The accuracy and loss are graphically depicted in Figures 4, 5 and 6, and were computed every epoch. The dataset of UCF101 was split into a training set and a test set at a ratio of 3:1. After training, an overall accuracy and loss were computed on the test set to evaluate the performance of the models. The number of epochs was set to 150. We used Adam as the optimization algorithm. We experimented with different batch sizes and step sizes, i.e., changing the sizes of L and m and the step stride n shown in Figure 2.
Our models were trained from scratch. Training from scratch may decrease the test accuracy, but it can eliminate interference and focus on the stepped sampler. The learning rate was set to 0.0001. The momentum was set to 0.01. The operators of batch normalization (BN) and ReLU activation were used after each convolutional layer in the CNN backbone (the CNN backbone is not shown in Figure 2). Data transformations were applied to enhance the network. The input frames were resized to 256 × 342 pixels.
4.3 EXPERIMENT RESULTS
Figure 3 shows the visualized results. We tested the sampler with batch size 25. Figure 4 presents the experimental results. Each subfigure shows the training loss and test accuracy of a model. The models differ only in the sampler, so as to compare the results caused by the sampler alone. Figure 4 (a) is the model of the traditional sampler, i.e., the sequential sampler and the batch sampler in PyTorch, and the others in Figure 4 are the models of the proposed stepped sampler. Figures 4 (b), (c), (d), (e), (f) differ only in the step stride, for comparison. From Figure 4, we consider step stride 2 (batch size 25, step size 20, in Figure 4 (c)) to be optimal. The training loss in Figure 4 (a) has many jitters, even when the epoch is more than 110, while the training losses in the other subfigures are much smoother and converge earlier than the traditional batch sampler model (Figure 4 (a)). Nonetheless, the test accuracy score of the traditional batch sampler model (Figure 4 (a)) is slightly higher: the test accuracy of Figure 4 (a) can reach 0.656, while the test accuracy of the model with the stride-2 stepped sampler (Figure 4 (c)) can reach 0.603. The test accuracy of the models in Figure 4 is shown in Table 1.
From Figure 4, the following can be concluded: a) In model training, LSTM with the stepped sampler converges faster than LSTM with the traditional sampler, and the convergence effect is better, i.e., there is no large jitter after the drop. b) When the batch size and the step size are fixed, a smaller step stride worsens the detection effect, and similarly, a larger step stride also worsens it; with the batch size and the step size fixed, the detection effect seems to follow a normal distribution over the step stride. c) However, LSTM with the traditional sampler with batch size 25 has a higher test accuracy on the test set, although this value is not much higher than that of the optimal stepped sampler model (Figure 4 (c)).
From Table 1, it can be concluded that, for the same batch size, the test accuracy of the LSTM with the stepped sampler rises faster than that of the traditional sampler LSTM. This can also be seen in Figure 4. Figure 5 and Figure 6 illustrate the traditional LSTM and the stepped sampler LSTM when the batch sizes are set to 20 and 15, respectively. The training losses of Figure 5 (c) and Figure 6 (c) converge faster than those of Figure 5 (a) and Figure 6 (a), which suggests that our method may have a broad-spectrum effect on machine learning. The test accuracy score of Figure 6 (c) is higher than that of Figure 6 (a), which indicates that the stepped sampler LSTM may achieve higher test accuracy than the traditional sampler LSTM, when the stepped sampler LSTM has batch size 15, step size 10, and step stride 5, and the traditional sampler LSTM has batch size 15. It can be seen that there is a large jitter around epoch 100 in Figure 6 (a). Figure 6 (b) and Figure 6 (c) have no large jitter after about epoch 60. The training loss of Figure 6 (c) drops faster than that of Figure 6 (a). The test accuracy of the three models is shown in Table 2. From Figure 6 we can see that the training loss of the stepped sampler model still converges faster than that of the traditional sampler model.
In our experiments, most LSTM models with the stepped sampler have a more stable convergence of training loss, compared with the traditional LSTM models with the same batch size. Figure 4 (a) and Figure 6 (a) are the loss curves of the traditional batch sampler; it can be seen that these loss curves have large jitters after convergence. The other loss curves in Figure 4 and Figure 6 are more stable after convergence. The stepped sampler LSTM may achieve a higher test accuracy than the traditional sampler LSTM at the same batch size, reaching a value of 0.639 (Table 2).
Our test uses the shuffle operation. ShuffleNet (Zhang et al., 2018) showed that the shuffle operation can improve image detection mAP. We argue that the reason should be that the shuffle operation reduces correlation. According to our definition in Appendix A, this correlation is the temporal information. Therefore, we consider that the shuffle operation can reduce the temporal information. If the shuffle operation is not used during detection, the frames are sequential, and we believe this continuity would have a certain impact on a model that exploits temporal information. Since the test data are shuffled, there should be less temporal information among them, so the test results may reflect the detection effect better. The literature (Zhou et al., 2018) shows that the shuffle operation has little impact on UCF101, so we see no disadvantage in using the shuffle operation when testing.
4.4 THE TRAINING TIME
Dividing a batch into multiple sub-batches might prolong the training time. However, since the repeated data are the same, the training time of a sub-batch is much shorter than that of an ordinary batch. Therefore, the total training time of the stepped sampler and the traditional sampler is almost the same. For example, when the batch sizes are all set to 15, the stepped sampler with a step stride of 5 and the traditional batch sampler both take about 60 hours to train under our experimental conditions. Moreover, we believe that the stepped sampler might need fewer training epochs than the traditional sampler.
Algorithm 1: The stepped sampler
Input: Dataset, batch size L, step size m, step stride n, with L > m ≥ n
Output: Stepped sub-batches of the dataset
Initialize the dataset order with the sequential sampler of PyTorch;
for Batch = 1, 2, ..., len(BatchSampler) do   // use the batch sampler to traverse all data
    Initialize an empty list step_batch[];
    for idx = 1, 2, ..., L do   // traverse the elements in a batch of the batch sampler
        append the idx-th item batch[idx] to step_batch[];
        idx += 1;
        if len(step_batch[]) == m then   // the size of step_batch has reached m
            output step_batch[];   // emit the sub-batch
            reset step_batch[] to an empty list;
            idx = idx − m + n;   // move the coordinate to the next sub-batch by stride n
        end
    end
end
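For concreteness, a minimal PyTorch sketch of Algorithm 1 follows. The class name `SteppedSampler` and the sliding-window loop are our own phrasing of the algorithm, not code from the paper; the sketch assumes `drop_last=True` batches and yields each sub-batch as a list of dataset indices.

```python
from torch.utils.data import BatchSampler, Sampler, SequentialSampler

class SteppedSampler(Sampler):
    """Yield overlapping sub-batches of size m from each batch of size L,
    moving the start of each sub-batch forward by the step stride n."""

    def __init__(self, data_source, batch_size, step_size, stride):
        assert batch_size > step_size >= stride  # L > m >= n, as in Algorithm 1
        self.batch_sampler = BatchSampler(
            SequentialSampler(data_source), batch_size, drop_last=True)
        self.step_size = step_size
        self.stride = stride

    def __iter__(self):
        for batch in self.batch_sampler:
            start = 0
            # emit sub-batches batch[start : start + m] until the batch is used up
            while start + self.step_size <= len(batch):
                yield batch[start:start + self.step_size]
                start += self.stride  # idx = idx - m + n in Algorithm 1

    def __len__(self):
        L = self.batch_sampler.batch_size
        d = (L - self.step_size) // self.stride  # Equation 2, rounded down
        return len(self.batch_sampler) * (d + 1)
```

The sampler plugs into a loader as `DataLoader(dataset, batch_sampler=SteppedSampler(dataset, 25, 20, 2))`, since each yielded sub-batch is itself a list of indices.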
5 DISCUSSION
The experiment studies the detection performance of the proposed sampler, which simulates one of the laws of human memory, repeating the input, to exploit the temporal information of videos (which we regard as the correlation within and between frames).
As the data are partly input repeatedly, the procedure may be equivalent to the timely knowledge review of the human brain, which strengthens the memory of the LSTM network and reduces information forgetting. The process is very similar to human learning as revealed by Hermann Ebbinghaus and illustrated in Subsection 3.1. LSTM can selectively memorize temporal information, which is human-like.
From Figure 4, more repetition in the stepped sampler is not always better: in Figure 4 (b) the stride is 1, and the convergence speed of the model is not improved much. Likewise, less repetition is not always better: in Figure 4 (f) the convergence is even slower, and Figure 5 (b) also seems to show this. The phenomenon mirrors human learning, where both too much and too little repetition fail to improve the learning effect. The experiments seem to support the similarity between machine learning and human learning. We hypothesize that artificial neural networks have human-like characteristics. What the best spaced repetition is remains to be studied; we suspect it should be "one solution to one issue", just as for humans.
Temporal information also seems to have human-like characteristics. Temporal information is the correlation of temporal sequences; analogous to human learning, it corresponds to the correlation of knowledge. In human learning, one of the important learning methods is to use the correlations between knowledge, and using temporal information may likewise be one of the important learning methods of machine learning.
6 CONCLUSION
We draw on the rules of human memory and propose the stepped sampler, a repeated-input method that uses the timely review approach. The timely review approach was proposed by Ebbinghaus and is used to strengthen human memory and learning. In our experiments, this method improves the detection performance of LSTM. The experimental results show that, compared with the traditional sampler, the training loss with the stepped sampler converges faster and is more stable after convergence, i.e., there is no large jitter after convergence. The test accuracy of the model with the stepped sampler also reaches a high point faster and is more stable as well. When the batch size is 15, the test accuracy of the stepped sampler LSTM is significantly higher than that of the traditional sampler with the same batch size. We analyzed the algorithm of the stepped sampler and derived several equations. Ebbinghaus also pointed out that utilizing the correlations between knowledge is another key to learning better. We believe that the partial repetition of the sampler enhances the correlation of the input frames, thereby enhancing their temporal information, following our definition of temporal information in Appendix A.
We attempt to describe the temporal information of video frames in mathematical language, as shown in Appendix A. Since these mathematical descriptions do not involve a specific artificial neural network, neural network parameters are not included in the equations.
We attempt to use human learning methods to study artificial neural networks. Compared with the traditional sampler, the stepped sampler LSTM learns faster and achieves a higher test accuracy under certain parameters. The results suggest that there may be a close relationship between biological and artificial neural networks, in structure and even in principle. How to improve learning differs for each individual human, and the test accuracy in our experiment may illustrate this point; we believe this is why not every stepped sampler LSTM has a higher test accuracy than the traditional sampler. The attention mechanism (Vaswani et al., 2017) may also have been inspired by human learning methods. Transfer learning (Bozinovski & Fulgosi, 1976), which uses old knowledge to learn new knowledge, may have been inspired by human learning methods as well. We believe that artificial neural networks seem to have human-like characteristics, and that human learning and machine learning share some similarities.
ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science Foundation of China under Grant 61773360.
B THE APPLICATION OF THE EQUATIONS OF THE TEMPORAL INFORMATION IN THE EXPERIMENT
In this section, we try to apply the equation in Appendix A to analyze the experimental results in Subsection 4.3.
The test accuracy in the experiment is the detection result, and the result is computed over the different frames sent into the model. Thus, the test accuracy can be approximately regarded as the temporal information between frames, i.e., test accuracy ≈ Tbf. The analysis is as follows. Since each video in the UCF101 dataset contains one object, Equation 7 can be transformed into Tbf = TA(nf | pf) = R(Apf ∩ Anf) / R(Apf). The test accuracy in our experiment is essentially the Intersection over Union (IoU) of the bounding boxes. Therefore,

test accuracy = IoU = Area(Apf ∩ Anf) / Area(Apf ∪ Anf)
              = [Area(Apf ∩ Anf) / area of frame] / [Area(Apf ∪ Anf) / area of frame]
              = R(Apf ∩ Anf) / R(Apf ∪ Anf)
              ≈ R(Apf ∩ Anf) / R(Apf) = Tbf    (13)

In the above equation, since the position of the objects in the UCF101 dataset does not change much between the previous and the next frame, the area occupied by the union of the objects over the two frames is approximately the same as that of the previous frame, i.e., R(Apf ∪ Anf) ≈ R(Apf).
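A small numeric sanity check of the approximation in Equation 13 can be written directly; the boxes below are hypothetical, chosen so that the object barely moves between frames, as assumed above.

```python
def box_area(box):
    # box = (x1, y1, x2, y2)
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def intersection_area(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return box_area((x1, y1, x2, y2))

prev_box = (100, 100, 200, 200)  # hypothetical object box in the previous frame
next_box = (102, 101, 202, 201)  # the box shifts only slightly in the next frame

inter = intersection_area(prev_box, next_box)
union = box_area(prev_box) + box_area(next_box) - inter
print(inter / union)               # IoU = R(Apf ∩ Anf) / R(Apf ∪ Anf), about 0.94
print(inter / box_area(prev_box))  # ≈ Tbf, about 0.97; close to the IoU for small motion
```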
C THE ROLE OF APPENDIX A IN THE PAPER
In Appendix A, one of our basic viewpoints is that temporal information is a kind of correlation. In Subsection 3.1, Ebbinghaus proposed that one way to improve human learning is to make good use of the correlations between knowledge. When this correlation is transferred to machine learning on video, it becomes the video temporal information. Therefore, making good use of video temporal information is the key to video detection. The sampler we propose enhances the temporal information.
Moreover, in Subsection 4.3, we analyzed the shuffle operation from the same viewpoint. Since shuffling reduces the correlations between the input data, it also reduces the temporal information, and therefore the interference of temporal information with the test is reduced. Temporal information is undoubtedly helpful for training, since training needs it to enhance learning. However, if it also helped at test time, it would inflate the test accuracy. Therefore, we added a shuffle operation to the test, reducing the test accuracy and making the results more objective.
In Section 5, we applied this viewpoint as well, and pointed out that the key to knowledge engineering lies in knowledge correlation, and that for video detection and language processing it lies in the temporal information. The above is the reason why we put forward the appendix on video temporal information. | 1. What are the main flaws in the paper's approach to representing temporal information?
2. How does the author criticize the method used for action recognition in the paper?
3. What is the issue with the evaluation process of the paper?
4. Does the paper provide sufficient contribution to the field of action recognition?
5. What is the significance of shuffling data in the context of testing a model? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a sampler for LSTM video models. The sampler works by repeating frames in a training batch. The method is evaluated for the task of action recognition on the UCF101 dataset.
Review
This paper presents many substantial flaws, which I detail below.
Anachronistic temporal information representation
The authors start by providing a 3-page-long formulation of "video temporal information". The authors base their formulation on the correlation of objects between frames. While this might make sense for some actions, such correlation is solely modelled as the ratio between an object bounding box and the video frame size. The authors then go on with many equations (10 of them) based on the Bayes theorem and mutual information theory to "model video temporal information", again based only on bounding box ratios.
This approach is completely oblivious to what has been done in the past 20 years to model actions. No optical flow, no visual features, no CNNs are mentioned at all. In fact, the authors do not include a single paper related to action recognition in their related work. Moreover, it is not clear how this temporal information formulation is even used, if it is used at all in this work. Indeed, after presenting their formulation, the authors go on to say they use a simple CNN with an LSTM to recognise actions. My guess is that the object-box-based formulation is not used, and that CNN features are instead employed, which leaves me wondering why such a formulation is presented in the first place.
To summarise this point, Section 3.2 (Video Temporal Information) is entirely disconnected from the rest of the paper. The temporal information formulation is overly simplistic and completely unaware of any previous work in the field. It is not clear whether this formulation is even employed.
Insufficient contribution
The core idea of this work is to repeat frames in a batch when training an LSTM to recognise actions. This very simple idea does not constitute a sufficient contribution, because it essentially corresponds to augmenting the training data with more frames, which naturally speeds up convergence. In short, no novel contribution is presented in any way, since the method is just a sampler that repeats frames in a trivial way.
Flawed and incomplete evaluation
Quoting from page 6:
Our test uses the shuffle operation, to make the test results more objective. Since the test data is shuffled, there should be less temporal between the data, the test results may depend on the sampler and model only, which could reflect the detection effect well.
Firstly, it is unclear how shuffling would make test results more "objective". Secondly, it is arguably incorrect to test a model shuffling the data as this introduces noise in the evaluation. Finally and most importantly, it is not clear why the method would perform better or worse by shuffling data. This confusion stems from the fact that the authors do not specify whether they shuffle the batch (in which case shuffling makes no sense during inference) or whether they shuffle frames within a sequence (which would be a major flaw since the videos would be altered).
Besides, the method is evaluated only on a single dataset without comparison to previous work. Results show that the proposed method under-performs in many cases as well. |
ICLR | Title
A stepped sampling method for video detection using LSTM
Abstract
Artificial neural networks are considered to simulate human neural networks, and have achieved great progress on object detection, natural language processing (NLP), image generation, etc. Hermann Ebbinghaus proposed the law of human memory and how to improve human learning in 1885. Inspired by Ebbinghaus' work, we propose a stepped sampler based on "repeated input", which is Ebbinghaus' finding on how to strengthen learning. We repeatedly input data to the LSTM model stepwise within a batch. The stepped sampler is used to strengthen the ability of LSTM to fuse temporal information. We tested the stepped sampler on the LSTM offered by PyTorch. Compared with the traditional samplers of PyTorch, such as the sequential sampler and the batch sampler, the training loss of the proposed stepped sampler converges faster in model training and is more stable after convergence, meaning that there is no large jitter after convergence. Meanwhile, it can maintain a higher test accuracy than the traditional sampler. We quantified the algorithm of the stepped sampler. We assume that artificial neural networks may have human-like characteristics, and that human learning methods could be used for machine learning. Our code will be available online soon.
1 INTRODUCTION
The emergence of convolutional neural networks (CNN) (LeCun et al., 1989) has improved the self-learning ability of artificial neural networks. Recurrent Neural Networks (RNN) (Mikolov et al., 2010) are used to process temporal data. An RNN takes the output of the previous time step as the input of the next time step, effectively using the temporal information of the input sequence.
RNNs may suffer from the problem of vanishing or exploding gradients. Hochreiter et al. (Hochreiter & Schmidhuber, 1997) proposed LSTM, which adds gates to the RNN and thus effectively avoids vanishing or exploding gradients. These gates are the forget gate, the input gate, and the output gate; the forget gate seems to be the most important among them. LSTM may simulate the memory process of the human brain, which selectively forgets some information in order to learn better.
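For reference, the standard LSTM gate equations (in the common formulation, not reproduced from this paper) can be written as follows, where σ is the sigmoid function and ⊙ is element-wise multiplication:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)      (forget gate)
i_t = σ(W_i · [h_{t−1}, x_t] + b_i)      (input gate)
c̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)   (candidate cell state)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t          (cell state update)
o_t = σ(W_o · [h_{t−1}, x_t] + b_o)      (output gate)
h_t = o_t ⊙ tanh(c_t)                    (hidden state)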
Considering that the principles of artificial neural networks may be learned from biological neural networks, we study, for artificial neural networks with memory effects such as LSTM, the effect of a human memory method, repeated input and timely review, on LSTM detection results, without changing the LSTM network structure.
In this study, we examine the effect of the proposed input method on neural networks with memory characteristics, such as LSTM. Specifically, we repeatedly input training data by simulating the "repeated input and timely review" method of human memory, which was proposed by Hermann Ebbinghaus (Ebbinghaus, 1913) in 1885 as "Increasing Memory Strength and Rate of Learning" in his book.
1.1 OUR CONTRIBUTION
Our contributions in this paper mainly include the following three aspects:
a) A novel sampler is proposed, which implements sampling in a circular and stepwise manner. Compared with the traditional sampler, the loss curve of the LSTM model using this stepped sampler converges faster in training and is more stable after convergence, i.e., there is no large jitter after convergence. Its test accuracy curve is more stable as well, with no jitter. When the batch size is 15, the test accuracy of the stepped sampler LSTM is much higher than that of the traditional sampler with the same parameters.
b) The idea of this sampler comes from the laws of human memory, which were proposed by Ebbinghaus (Ebbinghaus, 1913). We boldly assume that other human learning methods can also be applied to machine learning; one example is the attention mechanism (Vaswani et al., 2017). Moreover, from the experimental performance, we believe that artificial neural networks have human-like characteristics.
c) We attempt to describe the temporal information of video frames in mathematical language. We apply the resulting equations to our experimental results and argue that the test accuracy in the experiment reflects the temporal information between video frames. The derivation is shown in Appendix A and Appendix B.
2 RELATED WORK
Gibbs sampling is one of the earlier data sampling algorithms, proposed by Geman et al. (Geman & Geman, 1984) in 1984. It makes the probability distribution of the sampled data approximately equal to a required distribution via iterations: it randomly selects data from an initial input sequence and iterates according to specified conditional probabilities, which are related to the required distribution of the final samples. After the iterations, Gibbs sampling generates data consistent with the required probability distribution. Hu et al. (Hu et al., 2018) used neural networks to construct a sampler that transfers an initial data distribution to a target distribution; the method can generate the sampling data during training and works with un-normalized probability density functions. Wang et al. (Wang et al., 2018) used Generative Adversarial Nets (GAN) (Goodfellow et al., 2014) to generate negative samples; the approach is the first to combine GAN with negative sampling and improves the training of streaming recommendation systems. Chu et al. (Chu et al., 2019) proposed a novel sampler that samples both positive and negative data from the input sequences, so that the classifier can utilize both the Regions of Interest and the background of the data. The sampler is used in a few-shot image classifier based on reinforcement learning. Reinforcement learning (Kaelbling et al., 1996) needs to continuously select regions of interest from the images and then recognize their content; sampling these Regions of Interest improves the efficiency of reinforcement learning by reducing the number of input samples. Muhammad et al. (Muhammad et al., 2021) proposed a bi-directional long short-term memory (BiLSTM) with an attention mechanism and a dilated convolutional neural network (DCNN) for action recognition, which outperformed the state-of-the-art methods. Kwon et al. (Kwon et al., 2021) proposed a spatio-temporal neighbourhood learning method for action recognition that achieved state-of-the-art performance.
3 MATERIALS AND METHODS
This paper approaches the problem from the perspective of data input rather than network structure, and studies the impact of the memory effect on temporal-sequence neural networks such as LSTM. The process simulates the method of enhancing human memory by repeating the input data in a stepped way; the method was proposed by Hermann Ebbinghaus (Ebbinghaus, 1913) as "increasing the rate of learning" in his book. The specific mode we used was the wheel tactic (Smith, 1994) used when reciting words, implemented as a novel data sampler in LSTM model training. The dataset in the experiment is UCF101 (Soomro et al., 2012), a human action recognition video dataset, in which the name of each folder indicates the annotation of the video.
3.1 EBBINGHAUS FORGETTING CURVE
The Ebbinghaus forgetting curve (Ebbinghaus, 1913) describes how the memory of the human brain decays over time; it was proposed by Hermann Ebbinghaus in 1885. This theory reveals the law of human memory, and hence of human learning: when learning new knowledge, memory is lost quickly at first and slowly later. Ebbinghaus also pointed out that timely review and repeated input are the keys to preventing forgetting, consolidating knowledge, and learning better. Figure 1 illustrates the Ebbinghaus forgetting curve; timely review reduces forgetting, which makes the learning better. Based on the Ebbinghaus forgetting curve for the human brain, we simulate Ebbinghaus' method in machine learning. We believe the experimental results in Section 4 suggest a certain correlation between human learning and machine learning, since the machine learning method with timely review and spaced repetition learns faster than the same method without it.
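As a point of reference, the forgetting curve is often modeled as an exponential decay of retention. A minimal sketch follows; the functional form R = exp(−t/S) is a common approximation in the literature, not an equation taken from this paper, and the strength value is hypothetical.

```python
import math

def retention(t_hours, strength):
    """Common approximation of the Ebbinghaus forgetting curve:
    retention R = exp(-t / S), where S is the memory strength."""
    return math.exp(-t_hours / strength)

for t in (0, 1, 9, 24, 48):
    print(t, round(retention(t, strength=24.0), 3))
# timely review effectively raises S, slowing the decay of retention
```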
Ebbinghaus also found that making good use of the correlations between knowledge is another key to enhancing learning. We define these correlations as temporal information in Appendix A. Therefore, enhancing the use of temporal information is key to video detection, natural language processing (NLP), etc. We believe that the partly repeated input of the stepped sampler enhances this correlation and thus the temporal information.
3.2 LSTM
The LSTM architecture used in this paper starts with a CNN backbone. The CNN backbone has four convolutional layers with 32, 64, 128, and 256 convolution kernels, respectively. The kernel sizes are 5 × 5, 3 × 3, 3 × 3, and 3 × 3, the stride of each convolution is 2, and the padding is 0. Each convolutional layer is followed by a batch normalization (BN) (Ioffe & Szegedy, 2015) layer and a ReLU layer. The last part of the CNN is three fully connected (FC) layers with dropout, whose dimensions are 1024, 768, and 512, respectively. The LSTM used in the paper is the model provided by PyTorch, with input dimension 512, hidden dimension 512, and 3 hidden layers. It is followed by two fully connected (FC) layers with dropout, whose dimension is 256. The dropout rate of the CNN backbone and the LSTM is 0.3.
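A hedged PyTorch sketch of this architecture follows. The module names, the adaptive pooling before the first FC layer, and the final classification layer of size 101 are our assumptions where the text is not explicit.

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, out_dim=512, dropout=0.3):
        super().__init__()
        chans = [3, 32, 64, 128, 256]
        kernels = [5, 3, 3, 3]
        layers = []
        for i in range(4):
            # conv layer followed by BN and ReLU, stride 2, padding 0
            layers += [
                nn.Conv2d(chans[i], chans[i + 1], kernels[i], stride=2, padding=0),
                nn.BatchNorm2d(chans[i + 1]),
                nn.ReLU(inplace=True),
            ]
        self.conv = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)  # assumption: how spatial dims are collapsed
        self.fc = nn.Sequential(
            nn.Linear(256, 1024), nn.ReLU(inplace=True), nn.Dropout(dropout),
            nn.Linear(1024, 768), nn.ReLU(inplace=True), nn.Dropout(dropout),
            nn.Linear(768, out_dim),
        )

    def forward(self, x):  # x: (batch, 3, H, W)
        return self.fc(self.pool(self.conv(x)).flatten(1))

class CNNLSTM(nn.Module):
    def __init__(self, num_classes=101, dropout=0.3):
        super().__init__()
        self.encoder = CNNEncoder()
        self.lstm = nn.LSTM(input_size=512, hidden_size=512,
                            num_layers=3, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(inplace=True), nn.Dropout(dropout),
            nn.Linear(256, num_classes),  # assumption: classification head size
        )

    def forward(self, clips):  # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # classify from the last time step
```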
3.3 THE STEPPED SAMPLER
We compare our experiment against common samplers. Common samplers in PyTorch (Paszke et al., 2019) include the random sampler, the weighted random sampler, the batch sampler, etc.; batch sampling is probably the most commonly used. Previous research adds memory units to deep networks, such as RNN and LSTM. In analogy to human learning, one important element is repetition, and the sampler is an appropriate place to simulate "repetition", since the data in each batch can be designed to be input repeatedly. We suppose that "repetition" is important not only for human beings but also for computers. To make computers use "repetition" better, in analogy with the way we recite words, we propose a "stepped" repeated-input method: the stepped sampler.
The structure of the proposed stepped sampler is illustrated in Figure 2. It is built on the batch sampler. The stepped sampler divides a batch into several sub-batches. Like human memory, this sampler adopts the principle of adjacent repetition (Crowder, 1968): the back of the previous sub-batch is the same as the front of the next sub-batch.
The structure of the stepped sampler shows that the input data of adjacent sub-batches are partly duplicated. The repeated input seems to increase redundancy, but the experimental results show that, under our experimental conditions, this method accelerates the convergence of the LSTM model. There is a stride between the previous sub-batch and the next sub-batch, whose size n can be set manually. We believe that this partial repetition enhances the correlation of the input frames and thereby their temporal information, according to our definition of temporal information in Appendix A. Section 4 describes comparative experiments with different stride sizes.
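To make the adjacent repetition concrete, a toy example with hypothetical values L = 10, m = 6, n = 2 (not a configuration from the experiments) shows how adjacent sub-batches share m − n items:

```python
L, m, n = 10, 6, 2
batch = list(range(L))
sub_batches = [batch[s:s + m] for s in range(0, L - m + 1, n)]
print(sub_batches)
# [[0, 1, 2, 3, 4, 5], [2, 3, 4, 5, 6, 7], [4, 5, 6, 7, 8, 9]]
```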
3.4 THE ALGORITHM OF THE STEPPED SAMPLER
The stepped sampler is designed on the basis of the batch sampler. The algorithm implements stepped sampling within each batch produced by the batch sampler. The workflow is as follows: the data first go through the sequential sampler of PyTorch; then they are grouped into batches by the batch sampler; finally, the data in each batch are divided into sub-batches with the same stride by the stepped sampler.
As shown in Figure 2, assuming that the iteration number of the stepped sampler in a batch is d, it can be concluded from the figure:
L = m + n × d    (1)

It can be deduced that the stepped sampler iteration number per batch, d, is

d = (L − m) / n    (2)
Equation 2 gives the iteration number within a batch in Algorithm 1; d is computed by the algorithm once L, m, and n are determined. If d is not an integer, PyTorch rounds it down to ensure that d is an integer. The number of batches is calculated by the framework, and the number of epochs is set manually. The algorithm of the proposed sampler is shown in Algorithm 1; the idea is to implement the stepped sampler within each batch, after the sequential sampler and batch sampler of PyTorch. The last step of the inner loop in Algorithm 1 means that, after each stepped sub-batch is output, the starting coordinate moves forward by n (the step stride) items from the starting position of the previous sub-batch.
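A quick check of Equation 2 for the experimental configurations can be scripted as below; the floor rounding mirrors the behavior described above, and the printout is only illustrative:

```python
def iterations_per_batch(L, m, n):
    # d = (L - m) / n from Equation 2, rounded down when not an integer
    return (L - m) // n

for n in (1, 2, 3, 4, 5):
    print(f"L=25, m=20, n={n}: d = {iterations_per_batch(25, 20, n)}")
# n=1 -> d=5, n=2 -> d=2, n=3 -> d=1, n=4 -> d=1, n=5 -> d=1
```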
4 RESULTS
4.1 EXPERIMENT SETUP
The system used in the experiment was a workstation with 32 GB of CPU RAM and an NVIDIA GeForce 1080 Ti GPU. The processor was an Intel i7 8700, and the operating system was 64-bit Ubuntu 16.04. The PyTorch version used in the experiment was 1.0.1, the Python version was 3.6, the NumPy version was 1.20.4, and the scikit-learn version was 0.20.4; Matplotlib, Pandas, and tqdm completed the software environment. We chose an old version of PyTorch because, although the old version may not be as powerful, the experimental contrast may be larger, and since the old version has fewer features, the experiment can focus on the factor of "repeated input" without interference from other irrelevant factors.
We searched the relevant literature and found that there appears to be no LSTM work that applies human learning methods to machine learning so far. Therefore, we designed a comparison experiment: an ordinary CNN-LSTM with or without the stepped sampler, with all other parameters identical. One advantage of this design is that it reduces the influence of irrelevant factors and concentrates specifically on the machine learning results of the human learning method, namely the timely review proposed by Ebbinghaus.
4.2 TRAINING
Detection accuracy and cross-entropy loss were used for training the models. Accuracy was computed with the accuracy_score tool in the scikit-learn package, and the cross-entropy loss used the PyTorch implementation. Accuracy and loss are depicted graphically in Figures 4, 5, and 6 and were computed every epoch. The UCF101 dataset was split into a training set and a test set at a ratio of 3:1. After training, an overall accuracy and loss were computed on the test set to evaluate the performance of the models. The number of epochs was set to 150. We used Adam as the optimization algorithm. We experimented with different batch sizes and step sizes, i.e., changing L, m, and the step stride n shown in Figure 2.
Our experiment trains from scratch. Training from scratch may decrease the test accuracy, but it eliminates interference and lets us focus on the stepped sampler. The learning rate was set to 0.0001 and the momentum to 0.01. Batch normalization (BN) and ReLU activation were used after each convolutional layer in the CNN backbone (the CNN backbone is not shown in Figure 2). Data transformation was applied to the network input: the input frames were resized to 256 × 342 pixels.
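A condensed sketch of this training setup is given below. `CNNLSTM` and `SteppedSampler` refer to the earlier sketches, the data loaders are assumed to yield (clip, label) pairs, and Adam in PyTorch takes no momentum argument, so that detail is omitted here:

```python
import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score

model = CNNLSTM(num_classes=101)          # hypothetical module from the sketch above
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

for epoch in range(150):
    model.train()
    for clips, labels in train_loader:    # loader assumed built with SteppedSampler
        optimizer.zero_grad()
        loss = criterion(model(clips), labels)
        loss.backward()
        optimizer.step()

    model.eval()
    preds, targets = [], []
    with torch.no_grad():
        for clips, labels in test_loader:
            preds += model(clips).argmax(dim=1).tolist()
            targets += labels.tolist()
    print(epoch, accuracy_score(targets, preds))  # per-epoch curves as in Figure 4
```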
| 1. What is the focus and contribution of the paper on neural network models?
2. What are the strengths of the proposed methodology, particularly in its technical soundness and empirical credibility?
3. Do you have any concerns regarding the key intuition behind the stepped sampling technique?
4. How does the reviewer assess the determination of d in Equation 11, particularly for specific values of L, m, and n?
5. What is the basis for the authors' claims of stabler convergence using stepped sampling compared to traditional methods? Can it be quantified?
6. Why did the authors choose PyTorch version 1.0.1, given the availability of newer versions?
7. Does the reviewer have any minor comments or suggestions for improving the paper's clarity and completeness? | Summary Of The Paper
Review | Summary Of The Paper
The authors present a stepped sampling method to improve the learning capabilities of neural network models such as LSTM. Specifically, the stepped sampling procedure repeats the same input data in multiple batches; in other words, the batches "overlap" with one another in terms of the contained input data. This follows from the authors' argument that repeatedly providing the same data leads to faster and more stable convergence in training, as well as higher accuracy in testing. The authors experimentally show these benefits of their stepped sampling procedure over traditional sampling techniques (e.g., random sampling) in the context of action detection from videos using LSTMs.
Review
Strengths
The proposed methodology is technically sound and clearly presented.
The experiments show promising results, lending good empirical credibility to the authors' claims.
Weaknesses
My main concern is that while the authors argue that their proposed sampling technique is more "human-like" or closer to "how humans learn from data", I did not find any references in the paper backing up this claim. Are there any studies in psychology, neuroscience, or any other relevant field that grounds the key intuition for stepped sampling, which is repeating the parts of the same input in different batches? Without such grounding, the claim can come across as being made post-hoc, following the experimental success of stepped sampling.
Since d in Eq. 11 is called the iteration number, I presumed it is a (non-negative) integer. How, then, is d determined for L = 25, m = 20, and strides n = 2, 3, 4 in the results shown in Figure 3?
The authors make claims of stepped sampling leading to stabler convergence compared to traditional sampling techniques in a few places in the paper (e.g., last paragraph in Section 4.3). Is there a way to quantify this claim, e.g., by plotting the rate of change of the loss function across the training epochs?
Minor comments
Is there any specific reason the authors worked with PyTorch version 1.0.1 (Section 4.1), which came out in early 2019, when much newer versions such as 1.8.0 have been available since early 2021?
For the sake of completeness, please specify that L > m ≥ n in Algorithm 1.
ICLR | Title
A stepped sampling method for video detection using LSTM
Abstract
Artificial neural networks are considered to simulate the human neural networks, and achieves great progress on object detection, natural language processing (NLP), image generation, etc. Hermann Ebbinghaus proposed the law of human memory and how to improve human learning in 1885. Inspiring from Ebbinghaus’ work, we propose a stepped sampler based on the “repeated input”, which is Ebbinghaus’ contribution that how to strengthen the learning. We repeatedly inputted data to the LSTM model stepwise in a batch. The stepped sampler is used to strengthen the ability of fusing the temporal information in LSTM. We tested the stepped sampler on the LSTM offered by PyTorch. Compared with the traditional sampler of PyTorch, such as sequential sampler, batch sampler, the training loss of the proposed stepped sampler converges faster in the model training, and the training loss after convergence is more stable, which means that there is no large jitter after the convergence. Meanwhile, it can maintain a higher test accuracy, compared with the traditional sampler. We quantified the algorithm of the stepped sampler. We assume that, the artificial neural networks may have human-like characteristics, and human learning method could be used for machine learning. Our code will be available online soon.
N/A
Artificial neural networks are considered to simulate the human neural networks, and achieves great progress on object detection, natural language processing (NLP), image generation, etc. Hermann Ebbinghaus proposed the law of human memory and how to improve human learning in 1885. Inspiring from Ebbinghaus’ work, we propose a stepped sampler based on the “repeated input”, which is Ebbinghaus’ contribution that how to strengthen the learning. We repeatedly inputted data to the LSTM model stepwise in a batch. The stepped sampler is used to strengthen the ability of fusing the temporal information in LSTM. We tested the stepped sampler on the LSTM offered by PyTorch. Compared with the traditional sampler of PyTorch, such as sequential sampler, batch sampler, the training loss of the proposed stepped sampler converges faster in the model training, and the training loss after convergence is more stable, which means that there is no large jitter after the convergence. Meanwhile, it can maintain a higher test accuracy, compared with the traditional sampler. We quantified the algorithm of the stepped sampler. We assume that, the artificial neural networks may have human-like characteristics, and human learning method could be used for machine learning. Our code will be available online soon.
1 INTRODUCTION
The emergence of convolutional neural networks (CNN) (LeCun et al., 1989) has improved the selflearning ability of artificial neural networks. Recurrent Neural Network (RNN) (Mikolov et al., 2010) is used to process the temporal information data. RNN takes the output of the previous time period as the input of the next time period, effectively using the temporal information of the input sequence.
RNN sometimes may have the problem of gradient disappearance or gradient explosion. Hochreiter et al. (Hochreiter & Schmidhuber, 1997) proposed LSTM. LSTM adds gates to RNN, thus it can effectively avoid the problem of gradient disappearance or explosion. These gates include the forgetting gates, the input gates, and the output gates. The forgetting gate seems to be the most important among them. LSTM may simulate the memory process of human brain. Human brain selectively forgets some information for learning better.
Consider that one of the principles of neural networks may be learned from biological neural networks, for those artificial neural networks with the memory effects, such as LSTM, learning from the memory method of human, which is the repeated input and timely review, we study the effect of this method with repeated input on LSTM detection results, without considering changing the LSTM network structure.
In this study, we learn the effect of the proposed input method on neural networks with memory characteristics, such as LSTM. Specifically, it is to repeatedly input training data by simulating the “repeated input and timely review” method of the human memory, and the “repeated input and timely review” method is proposed by Hermann Ebbinghaus (Ebbinghaus, 1913) in 1885, which is the “Increasing Memory Strength and Rate of Learning” in his literature.
1.1 OUR CONTRIBUTION
Our views in this paper mainly include the following 3 aspects:
a) A novel sampler is proposed, which implements sampling in a circular and stepwise manner. Compared with the traditional sampler, the loss curve of the LSTM model using this stepped sampler converges faster in training, and is more stable after the convergence, namely there is no large jitter after the convergence. Moreover, its test accuracy curve is more stable either, which has no jitter. When the batch size is 15, the test accuracy of the stepped sampler LSTM is much higher than that of the traditional sampler with the same parameters.
b) The idea of this sampler comes from the laws of human memory, which was proposed by Ebbinghaus (Ebbinghaus, 1913). We courageously assume that, other human learning methods can also be applied to machine learning. One example is the proposal of the attention mechanism (Vaswani et al., 2017). Moreover, we believe that artificial neural networks have human-like characteristics from the experimental performance.
c) We try to use mathematical language to describe the temporal information of the video frames. We try to apply the mathematical equations to our experimental results, and analyze that the test accuracy in the experiment is the temporal information between video frames. The derivation process is shown in Appendix A and Appendix B.
2 RELATED WORK
Gibbs sampling is one of the earlier data sampling algorithms, which is proposed by Geman et al. (Geman & Geman, 1984) in 1984. Gibbs sampling is to make the probability of the data sample approximately equal to the required probability distribution via iterations. Gibbs sampling randomly selects data from an initial input sequence, and iterates according to the specified conditional probabilities, which are related to the required probability distribution of the final sampling data. After iterations, Gibbs sampling generates data which is consistent with the required probability distribution. Hu et al. (Hu et al., 2018) used neural networks to generate a sampler, which transfer the initial data distribution to the target distribution. The method can generate the sampling data at the same time of training. This method works with the un-normalized probability density function. Wang et al. (Wang et al., 2018) used Generative Adversarial Nets (GAN) (Goodfellow et al., 2014) to generate the negative samples. The approach is the first to combine GAN with the negative sampling method, which improves the training effect of the streaming recommend system. Chu et al. (Chu et al., 2019) proposed a novel sampler that can sample both the positive and the negative data from the input data sequences, so as to let the classifier utilize the Regions of Interests and the background of the data. The sampler is used in the few-shot image classifier, which uses the reinforcement learning method. The reinforcement learning algorithm (Kaelbling et al., 1996) needs to continuously select the regions of interests from the images, subsequently to recognize the content of the Regions of Interests. Sampling these Regions of Interests can improve the efficiency of reinforcement learning, for the reason of the reduction of the input samples. Muhammad et al. (Muhammad et al., 2021) proposed a bi-directional long short-term memory (BiLSTM) with attention mechanism and a dilated convolutional neural network (DCNN) to perform action recognition, which outperformed the state-of-the-art methods. Kwon et al. (Kwon et al., 2021) proposed a spatio-temporal neighbourhood learning method on action recognition, which performed the state-of-the-art.
3 MATERIALS AND METHODS
This paper is from the perspective of data input, rather than the neural network structure, and study the impact of the memory effect on the temporal sequence neural networks (such as LSTM). The process simulates the method of enhancing the memory process of human brain, repeats the input data in a stepped way. The method is proposed by Hermann Ebbinghaus (Ebbinghaus, 1913) called “Increasing rate of learning” in his book. The specific mode we used was the wheel tactic (Smith, 1994) when we recited words, by establishing a novel data sampler in the LSTM model training. The dataset in the experiment is UCF101 (Soomro et al., 2012), which is a human action recognition video dataset. The name of each folder indicates the annotation of the video.
3.1 EBBINGHAUS FORGETTING CURVE
Ebbinghaus forgetting curve (Ebbinghaus, 1913) describes the memory effect of human brain over time, which was proposed by Hermann Ebbinghaus in 1885. This theory reveals the human memory law. It is also the law of human learning. That is, the loss of human memory when learning new knowledge is drop fast first and slow later. Ebbinghaus also pointed out that, timely review and repeated input are the key point to prevent forgetting, consolidate knowledge, and learn better. Figure 1 illustrates Ebbinghaus forgetting curve, and timely review can reduce the knowledge forgetting, which makes the learning better. Based on Ebbinghaus forgetting curve on the human brain, we simulated Ebbinghaus’ method on machine learning. We believe that the experimental results in Section 4 could prove that there is a certain correlation between human learning and machine learning, since the machine learning method with timely review and spaced repeat has a faster learning effect, compared with the machine learning without the human-like method.
Ebbinghaus also found that making good use of the correlations between knowledge is another key to enhancing learning. We define these correlations as temporal information in Appendix A. Enhancing the use of temporal information is therefore key to video detection, natural language processing (NLP), etc. We believe the partly repeated input of the stepped sampler enhances this correlation and hence the temporal information.
3.2 LSTM
The LSTM architecture used in this paper starts with a CNN backbone. The CNN backbone has four convolutional layers with 32, 64, 128, and 256 convolution kernels respectively. The kernel sizes are 5 × 5, 3 × 3, 3 × 3, and 3 × 3, the stride of each convolution kernel is 2, and the padding is 0. Each convolutional layer is followed by a batch normalization (BN) (Ioffe & Szegedy, 2015) layer and a ReLU layer. The last part of the CNN model is 3 fully connected (FC) layers with dropout, of dimensions 1024, 768, and 512 respectively. The LSTM used in the paper is the model provided in PyTorch. The input dimension of the LSTM is 512, the hidden layer dimension is 512, and the number of hidden layers is 3. It is followed by two fully connected (FC) layers with dropout, of dimension 256. The dropout rate of the CNN backbone and the LSTM are both 0.3.
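A minimal PyTorch sketch of this CNN-LSTM is given below. Layer widths, kernel sizes, strides, and dropout rates follow the text above; the input channel count, the flattened feature size (handled here with `nn.LazyLinear`), the exact classification head, and the use of the last time step for classification are our assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes=101):  # 101 classes for UCF101 (assumption)
        super().__init__()
        chans, kernels = [3, 32, 64, 128, 256], [5, 3, 3, 3]
        conv = []
        for i in range(4):
            conv += [nn.Conv2d(chans[i], chans[i + 1], kernels[i], stride=2, padding=0),
                     nn.BatchNorm2d(chans[i + 1]), nn.ReLU()]
        self.cnn = nn.Sequential(*conv)
        # 3 FC layers of dimensions 1024, 768, 512 with dropout (rate 0.3)
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.LazyLinear(1024), nn.ReLU(), nn.Dropout(0.3),
                                nn.Linear(1024, 768), nn.ReLU(), nn.Dropout(0.3),
                                nn.Linear(768, 512))
        self.lstm = nn.LSTM(input_size=512, hidden_size=512, num_layers=3,
                            batch_first=True, dropout=0.3)
        # two FC layers of dimension 256 with dropout; output layer is an assumption
        self.head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.3),
                                  nn.Linear(256, num_classes))

    def forward(self, frames):            # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.fc(self.cnn(frames.flatten(0, 1)))  # per-frame 512-d features
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])      # classify from the last time step
```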
3.3 THE STEPPED SAMPLER
Our experiment is compared with common samplers. Common samplers in PyTorch (Paszke et al., 2019) include the random sampler, weighted random sampler, batch sampler, etc. Batch sampling is nearly the most commonly used. Previous research adds memory units to deep learning networks, such as RNN and LSTM. By analogy to human learning, an important element is repetition, and the sampler is an appropriate place to simulate this "repetition", since the data in each batch can be designed to be input repeatedly. We suppose that repetition is important not only for human beings but also for computers. To let computers better exploit repetition, by analogy to the way we recite words, we propose a "stepped" repeated-input method: the stepped sampler.
The structure of the proposed stepped sampler is illustrated in Figure 2. It is built on the batch sampler. The stepped sampler divides a batch into several sub-batches. Like human memory, the sampler adopts the principle of adjacent repetition (Crowder, 1968); namely, the back of the previous sub-batch is the same as the front of the next sub-batch.
The structure of the stepped sampler shows that the input data of different sub-batches is partly duplicated. The repeated input seems to increase redundancy, but the experimental results show that, under our experimental conditions, this method accelerates the convergence of the LSTM model. There is a stride between the previous sub-batch and the next sub-batch, whose size n can be set manually. We believe this partial repetition enhances the correlation of the input frames, and thereby the temporal information of the input frames, according to our definition of temporal information in Appendix A. Section 4 describes comparative experiments with different stride sizes.
3.4 THE ALGORITHM OF THE STEPPED SAMPLER
The stepped sampler is designed on the basis of the batch sampler. The algorithm implements stepped sampling within each batch produced by the batch sampler. The workflow is as follows: the data first goes through the sequential sampler of PyTorch; then it is grouped into batches by the batch sampler; finally, the data in each batch is divided into sub-batches with a fixed stride by the stepped sampler.
As shown in Figure 2, assuming that the iteration number of the stepped sampler in a batch is d, it can be concluded from the figure:
$$L = m + n \times d \qquad (1)$$

It can be deduced that the stepped sampler iteration number per batch, $d$, is:

$$d = \frac{L - m}{n} \qquad (2)$$
Equation 2 gives the iteration number within a batch in Algorithm 1; d is computed by the algorithm once L, m, and n are determined. If d is not an integer, PyTorch rounds it down to ensure that d is an integer. The number of batches is calculated by the framework, and the number of epochs is set manually. The algorithm of the proposed sampler is shown in Algorithm 1. The idea is to implement the stepped sampler within each batch, after the sequential sampler and batch sampler of PyTorch. Line 12 of Algorithm 1 states that, after each sub-batch's output, the starting coordinate is moved by n (the step stride) data points from the starting position of the previous sub-batch.
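As a worked example of Equation 2, consider the configuration reported as optimal in Section 4 (batch size 25, step size 20, step stride 2); the listed sub-batch start indices are our illustration:

```python
L, m, n = 25, 20, 2          # batch size, step size, step stride (Figure 4 (c))
d = (L - m) // n             # floor division mirrors PyTorch's rounding: d = 2
# sub-batches then cover indices 0..19, 2..21, 4..23 within the batch
```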
4 RESULTS
4.1 EXPERIMENT SETUP
The system used in the experiment was a workstation with 32 GB of CPU RAM and an NVIDIA GeForce 1080ti GPU. The processor was an Intel i7 8700, and the operating system was Ubuntu 16.04 64-bit. The PyTorch version used in the experiment was 1.0.1, the Python version was 3.6, the Numpy version was 1.20.4, and the Sklearn version was 0.20.4; Matplotlib, Pandas, and tqdm completed the software environment. We chose an older PyTorch version because, although it may be less powerful, the contrast between experimental results may be larger; and since the older version has fewer features, the experiment can focus on the factor of "repeated input" without interference from other irrelevant factors.
We searched the relevant literature and found that there may be no existing LSTM work that applies human learning methods to machine learning. Therefore, we designed a comparison experiment: an ordinary CNN-LSTM with or without the stepped sampler, with all other parameters identical. One advantage of this design is that it reduces the influence of irrelevant factors and concentrates specifically on the effect of the human learning method, namely the timely review proposed by Ebbinghaus.
4.2 TRAINING
Detection accuracy and cross-entropy loss were used for training the models. The accuracy evaluation used the accuracy score tool in the Sklearn package of Python; the cross-entropy loss used the corresponding PyTorch function. Accuracy and loss are depicted graphically in Figures 4, 5 and 6, and were computed every epoch. The UCF101 dataset was split into a training set and a test set at a ratio of 3:1. After training, overall accuracy and loss were computed on the test set to evaluate model performance. The number of epochs was set to 150. We used Adam as the optimization algorithm. We experimented with different batch sizes and step sizes, i.e., changing L, m and the step stride n shown in Figure 2.
Our models were trained from scratch. Training from scratch may decrease test accuracy, but it eliminates interference and focuses on the stepped sampler. The learning rate was set to 0.0001 and the momentum to 0.01. Batch normalization (BN) and ReLU activation were applied after each convolutional layer in the CNN backbone; the CNN backbone is not shown in Figure 2. Data transformations were applied to strengthen the network, and the input frames were resized to 256 × 342 pixels.
4.3 EXPERIMENT RESULTS
Figure 3 shows the visualized results. We tested the sampler with batch size 25. Figure 4 presents the experimental results; each subfigure shows the training loss and test accuracy of a model. The models differ only in the sampler, so the comparison isolates the effect of the sampler. Figure 4 (a) is the model with the traditional sampler, i.e., the sequential sampler and batch sampler in PyTorch; the others in Figure 4 are models with the proposed stepped sampler. Figures 4 (b), (c), (d), (e), (f) differ only in step stride. From Figure 4, we consider step stride 2 (batch size 25, step size 20, Figure 4 (c)) to be optimal. The training loss in Figure 4 (a) has many jitters, even beyond epoch 110, while the training loss in the other subfigures is much smoother and converges earlier than that of the traditional batch sampler model (Figure 4 (a)). Nonetheless, the test accuracy of the traditional batch sampler model (Figure 4 (a)) is slightly higher: 0.656, versus 0.603 for the model with the stride-2 stepped sampler (Figure 4 (c)). The test accuracy of the models in Figure 4 is shown in Table 1.
From Figure 4, the following can be concluded: a) In training, the LSTM with the stepped sampler converges faster than the LSTM with the traditional sampler, and the convergence is better, i.e., there is no large jitter after the drop. b) When the batch size and step size are fixed, both too small and too large a step stride worsen the detection effect; with the batch size and step size fixed, the detection effect appears to follow a bell-shaped curve over the step stride. c) However, the LSTM with the traditional sampler at batch size 25 has a higher test accuracy on the test set, although this value is not much higher than that of the optimal stepped sampler model (Figure 4 (c)).
From Table 1, it can be concluded that, for the same batch size, the test accuracy of the LSTM with the stepped sampler rises faster than that of the traditional sampler LSTM; this can also be seen in Figure 4. Figures 5 and 6 illustrate the traditional LSTM and the stepped sampler LSTM with batch sizes 20 and 15 respectively. The training loss of Figure 5 (c) and Figure 6 (c) converges faster than that of Figure 5 (a) and Figure 6 (a), which suggests that our method may have a broad-spectrum effect on machine learning. The test accuracy of Figure 6 (c) is higher than that of Figure 6 (a), which indicates that the stepped sampler LSTM may achieve higher test accuracy than the traditional sampler LSTM when the batch size is 15, the step size is 10, and the step stride is 5. There is a large jitter around epoch 100 in Figure 6 (a), whereas Figures 6 (b) and 6 (c) have no large jitter after about epoch 60. The training loss of Figure 6 (c) also drops faster than that of Figure 6 (a). The test accuracy of the three models is shown in Table 2. From Figure 6 we can see that the training loss of the stepped sampler model still converges faster than that of the traditional sampler model.
In our experiments, most LSTM models with the stepped sampler show a more stable convergence of training loss than traditional LSTM models with the same batch size. Figure 4 (a) and Figure 6 (a) are the loss curves of the traditional batch sampler; these loss curves have large jitters after convergence. The other loss curves in Figures 4 and 6 are more stable after convergence. The stepped sampler LSTM may also reach a higher test accuracy than the traditional sampler LSTM at the same batch size, up to 0.639 (Table 2).
Our test uses the shuffle operation. ShuffleNet (Zhang et al., 2018) shows that the shuffle operation can improve image detection mAP. We reason that the shuffle operation reduces correlation; according to our definition in Appendix A, this correlation is the temporal information, so the shuffle operation reduces the temporal information. If the shuffle operation is not used during testing, the frames are sequential, and we believe this continuity would affect a model that relies on temporal information. Since the test data is shuffled, there is less temporal information among the data, so the test results may better reflect the true detection performance. The literature (Zhou et al., 2018) shows that shuffling has little impact on UCF101, so we see no disadvantage in shuffling during testing.
4.4 THE TRAINING TIME
Dividing a batch into multiple sub-batches might prolong the training time. However, since the repeated data are identical, a sub-batch trains much faster than an ordinary batch, so the total training times of the stepped sampler and the traditional sampler are almost the same. For example, with batch size 15, the stepped sampler with step stride 5 and the traditional batch sampler both take about 60 hours to train under our experimental conditions. Moreover, we believe the number of training epochs required by the stepped sampler might be smaller than that of the traditional sampler.
Algorithm 1: The stepped sampler
 1: Input: Dataset, batch size L, step size m, step stride n, with L > m ≥ n
 2: Output: Stepped sub-batches of the dataset
 3: Initialize the dataset with the Sequential sampler of PyTorch
 4: for Batch = 1, 2, ..., len(BatchSampler) do      // use the batch sampler to traverse all data
 5:     Initialize the empty set step_batch[]
 6:     for idx = 1, 2, ..., L do                    // traverse the elements in a batch of the batch sampler
 7:         output the idx-th item batch[idx] into step_batch[]
 8:         idx += 1
 9:         if len(step_batch[]) == m then           // the size of step_batch has reached m
10:             return step_batch[]                  // output the sub-batch
11:             Reset step_batch[] to the empty set
12:             idx = idx − m + n                    // move the coordinate to the next sub-batch by stride n
13:         end if
14:     end for
15: end for
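A runnable PyTorch sketch of Algorithm 1 is given below. It wraps the sequential and batch samplers as described in Subsection 3.4; the class name, the `__len__` approximation, and the decision to drop an incomplete trailing sub-batch are our assumptions.

```python
from torch.utils.data import Sampler, BatchSampler, SequentialSampler

class SteppedSampler(Sampler):
    """Yield overlapping sub-batches of size m with stride n inside each batch of size L."""
    def __init__(self, data_source, L, m, n):
        assert L > m >= n > 0
        self.batches = BatchSampler(SequentialSampler(data_source),
                                    batch_size=L, drop_last=False)
        self.m, self.n = m, n

    def __iter__(self):
        for batch in self.batches:
            start = 0
            while start + self.m <= len(batch):    # drop an incomplete trailing sub-batch
                yield batch[start:start + self.m]  # output one stepped sub-batch
                start += self.n                    # idx = idx - m + n in Algorithm 1

    def __len__(self):  # approximate count, assuming full batches
        L = self.batches.batch_size
        return ((L - self.m) // self.n + 1) * len(self.batches)
```

It can then be passed to a data loader, e.g. `DataLoader(dataset, batch_sampler=SteppedSampler(dataset, L=25, m=20, n=2))`.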
5 DISCUSSION
The experiment studies the detection effect of the proposed sampler, which simulates one of the human brain's memory laws, repeated input, to exploit the temporal information (which we regard as the correlation in and between frames) of videos.
As the data are partly repeatedly input, the process may be equivalent to the timely knowledge review of the human brain, which strengthens the memory of the LSTM network and reduces information forgetting. The process closely resembles human learning as revealed by Hermann Ebbinghaus and illustrated in Subsection 3.1. LSTM can selectively memorize temporal information, which is human-like.
From Figure 4, more repetition in the stepped sampler is not always better: in Figure 4 (b), where the stride is 1, the convergence speed of the model is not improved much. Likewise, less repetition is not always better: in Figure 4 (f), the convergence is even slower, and Figure 5 (b) seems to show the same. The phenomenon mirrors human learning, where both too much and too little repetitive input fail to improve learning. The experiments thus seem to support the similarity between machine learning and human learning, under our assumption that artificial neural networks have human-like characteristics. What the best spaced repetition is remains to be studied; we expect it to be "one solution to one issue", just as for humans.
Temporal information also seems to have human-like characteristics. Temporal information is the correlation within temporal sequences; in human learning, the analogue is the correlation of knowledge. One of the important methods in human learning is to use the correlation of knowledge, and using temporal information may likewise be one of the important methods of machine learning.
6 CONCLUSION
We draw on the rules of human memory and propose the stepped sampler, a repeated-input method based on the timely review approach that Ebbinghaus proposed to strengthen human memory and learning. In our experiments, this method improves the training of LSTM. The experimental results show that, compared with the traditional sampler, the training loss with the stepped sampler converges faster and is more stable after convergence, i.e., there is no large jitter. The test accuracy of the model with the stepped sampler also reaches a high point faster and is likewise more stable. When the batch size is 15, the test accuracy of the stepped sampler LSTM is significantly higher than that of the traditional sampler with the same batch size. We analyzed the algorithm of the stepped sampler and derived several equations. Ebbinghaus also pointed out that utilizing the correlations between knowledge is another key to learning better. We believe the partial repetition of the sampler enhances the correlation of the input frames, and thereby their temporal information, under our definition of temporal information in Appendix A.
We attempt to describe the temporal information of video frames in mathematical language in Appendix A. Since these mathematical descriptions do not involve a specific artificial neural network, no network parameters appear in the equations.
We attempt to use human learning methods to study artificial neural networks. Compared with the traditional sampler, the stepped sampler LSTM learns faster and achieves higher test accuracy under certain parameters. The results suggest that there may be a close relationship between biological and artificial neural networks, in structure and perhaps even in principle. How best to improve learning differs for each human individual, and the test accuracies in our experiments may illustrate this point; we believe this is why not every stepped sampler LSTM achieves higher test accuracy than the traditional sampler. The attention mechanism (Vaswani et al., 2017) may also be inspired by human learning methods, and transfer learning (Bozinovski & Fulgosi, 1976), which uses old knowledge to learn new knowledge, may be as well. We believe that artificial neural networks appear to have human-like characteristics, and that human learning and machine learning share some similarities.
ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science Foundation of China under Grant 61773360.
B THE APPLICATION OF THE EQUATIONS OF THE TEMPORAL INFORMATION IN THE EXPERIMENT
In this section, we try to apply the equation in Appendix A to analyze the experimental results in Subsection 4.3.
The test accuracy in the experiment is the detection result over the different frames fed into the model. Thus, the test accuracy can be approximately regarded as the temporal information between frames, i.e., the test accuracy $\approx T_{bf}$. The analysis is as follows. Since each UCF101 video contains a single object, Equation 7 can be transformed into $T_{bf} = T_A(nf \mid pf) = \frac{R(A_{pf} \cap A_{nf})}{R(A_{pf})}$. The test accuracy in our experiment is
essentially the Intersection over Union (IoU) of the bounding boxes. Therefore,
$$\text{test accuracy} = \mathrm{IoU} = \frac{\mathrm{Area}(A_{pf} \cap A_{nf})}{\mathrm{Area}(A_{pf} \cup A_{nf})} = \frac{\mathrm{Area}(A_{pf} \cap A_{nf}) / \text{area of frame}}{\mathrm{Area}(A_{pf} \cup A_{nf}) / \text{area of frame}} = \frac{R(A_{pf} \cap A_{nf})}{R(A_{pf} \cup A_{nf})} \approx \frac{R(A_{pf} \cap A_{nf})}{R(A_{pf})} = T_{bf} \qquad (13)$$
In the above equation, since the position of the objects in the UCF101 dataset does not change much between the previous and the next frame, the union of the object areas over the two frames is approximately the same as the area in the previous frame, i.e., $R(A_{pf} \cup A_{nf}) \approx R(A_{pf})$.
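A small numerical illustration of this approximation (the box coordinates are invented for illustration): for two 10 × 10 boxes shifted by 0.5 pixels, IoU ≈ 0.905 while the approximation $R(A_{pf} \cap A_{nf})/R(A_{pf}) = 0.95$, so the two agree up to the small displacement.

```python
def iou_and_approx(a, b):
    """Boxes as (x1, y1, x2, y2); returns (IoU, intersection / area of previous-frame box)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter), inter / area_a

print(iou_and_approx((0, 0, 10, 10), (0.5, 0, 10.5, 10)))  # ~(0.905, 0.95)
```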
C THE ROLE OF APPENDIX A IN THE PAPER
In Appendix A, one of our basic viewpoints is that temporal information is a kind of correlation. In Subsection 3.1, Ebbinghaus proposed that one way to improve human learning is to make good use of the correlations between knowledge. Transferred to the machine learning of video, this correlation is the video temporal information. Therefore, making good use of video temporal information is the key to video detection, and the sampler we propose enhances the temporal information.
Moreover, in Subsection 4.3, we applied this view when analyzing the shuffle operation. Since shuffling reduces correlations between the input data, it also reduces the temporal information, and hence the interference of temporal information with testing. Temporal information is undoubtedly helpful for training, since training needs it to enhance learning; but if it also helps at test time, it inflates the test accuracy. Therefore, we added a shuffle operation to the test, reducing this inflation and making the results more objective.
In Section 5, we applied this view as well, pointing out that the key to knowledge engineering lies in knowledge correlation, and for video detection and language processing, in the temporal information. The above is why we put forward the appendix on video temporal information. | 1. What is the main contribution of the paper in terms of video detection?
2. What are the strengths of the proposed approach, particularly in terms of its inspiration from human memory and training convergence?
3. What are the weaknesses of the paper regarding its performance and limitations in terms of task and network structure scope?
4. Do you have any concerns about the method's usefulness and predictability in other situations? | Summary Of The Paper
Review | Summary Of The Paper
This paper provides a stepped sampling method, applied only to video detection and only within an LSTM structure. The idea is inspired by the human brain's repetition-based memory mechanism. The method achieves fast convergence during training, smooths the training loss curves, and obtains good test accuracy when the batch size equals 15.
Review
Strengths: 1. The paper gives very detailed derivations, equations, and algorithms for its idea, and the illustrations clearly convey the idea.
Weaknesses: 1. Performance: it is of little use to discuss convergence speed without reaching the optimal convergence value. In the methods comparison in the RESULTS section, the authors only achieve better results when the batch size equals 15, and they do not explain why batch sizes of 25 or 15 were chosen; the parameter selection seems arbitrary, so the results cannot be predicted or reproduced in other situations. 2. As the paper limits its usage and performance to a very narrow scope (a specific task, video detection, and a specific network structure, LSTM) and the results are weak and seem unpredictable under reproduction, there is no need to discuss other weaknesses. The method is useless.
ICLR | Title
Compressed Predictive Information Coding
Abstract
Unsupervised learning plays an important role in many fields, such as machine learning, data compression, and neuroscience. Compared to static data, methods for extracting low-dimensional structure from dynamic data are lagging. We developed a novel information-theoretic framework, Compressed Predictive Information Coding (CPIC), to extract predictive latent representations from dynamic data. Predictive information quantifies the ability to predict the future of a time series from its past. CPIC selectively projects the past (input) into a low-dimensional space that is predictive of the compressed data projected from the future (output). The key insight of our framework is to learn representations by balancing the minimization of compression complexity with the maximization of the predictive information in the latent space. We derive tractable variational bounds of the CPIC loss by leveraging bounds on mutual information. The CPIC loss induces the latent space to capture information that is maximally predictive of the future of the data from the past. We demonstrate that introducing stochasticity in the encoder and maximizing the predictive information in latent space contributes to learning more robust latent representations. Furthermore, our variational approaches perform better in mutual information estimation compared with estimates under the commonly used Gaussian assumption. We show numerically on synthetic data that CPIC can recover dynamical systems embedded in noisy observation data with low signal-to-noise ratios. Finally, we demonstrate that CPIC extracts features that are more predictive for forecasting exogenous variables, as well as for auto-forecasting, in various real datasets compared with other state-of-the-art representation learning models. Together, these results indicate that CPIC will be broadly useful for extracting low-dimensional dynamic structure from high-dimensional, noisy time-series data.
1 INTRODUCTION
Unsupervised methods play an important role in learning representations that provide insight into data and exploit unlabeled data to improve performance in downstream tasks in diverse application areas (Bengio et al., 2013; Chen et al., 2020; Grill et al., 2020; Devlin et al., 2018; Brown et al., 2020; Baevski et al., 2020; Wang et al., 2020). Prior work on unsupervised representation learning can be broadly categorized into generative models such as variational autoencoders (VAEs) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014), and discriminative models such as dynamical components analysis (DCA) (Clark et al., 2019), contrastive predictive coding (CPC) (Oord et al., 2018), and deep autoencoding predictive components (DAPC) (Bai et al., 2020). Generative models focus on capturing the joint distribution between representations and inputs, but are usually computationally expensive. Discriminative models, on the other hand, emphasize capturing the dependence structure of the data in the low-dimensional latent space, and are therefore easier to scale to large datasets.
In the case of time series, some representation learning models take advantage of an estimate of mutual information between encoded past (input) and the future (output) (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Oord et al., 2018). Although previous models utilizing mutual information extract low-dimensional representations, they tend to be sensitive to noise in the observational space. DCA directly makes use of the mutual information between the past and the future (i.e., the predictive information (Bialek et al., 2001)) in a latent representational space that is a linear embedding of the observation data. However, DCA operates under Gaussian assumptions for mutual information
estimation. We propose a novel representation learning framework which is not only robust to noise in the observation space but also alleviates the Gaussian assumption and is thus more flexible.
We formalize our problem in terms of data generated from a stationary dynamical system and propose an information-theoretic objective function for Compressed Predictive Information Coding (CPIC). Instead of leveraging the information bottleneck (IB) objective directly as in Creutzig & Sprekeler (2008) and Creutzig et al. (2009), where the past latent representation is used to predict future observations directly, we predict the compressed future observations filtered by the encoder. This is because, in the time series setting, future observations are noisy, and treating them as labels is not insightful. Specifically, our target is to extract a latent representation that better predicts the future underlying dynamics. Since the compressed future observations are assumed to retain only the underlying dynamics, better compression contributes to extracting a better dynamical representation. In addition, inspired by Clark et al. (2019) and Bai et al. (2020), we extend the prediction from a single input to a window of inputs to capture higher-order predictive information.
Moreover, instead of directly estimating the objective under a Gaussian assumption (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Clark et al., 2019; Bai et al., 2020), we develop variational bounds and a tractable end-to-end training framework based on the neural estimators of mutual information studied in Poole et al. (2019). Note that our inference is the first to leverage the variational bounding technique for self-supervised learning on time series data. Since it alleviates the Gaussian assumption, it is applicable to a much larger class of dynamical systems.
In CPIC, we also demonstrate that introducing stochasticity into either a linear or nonlinear encoder robustly contributes to numerically better representations across different tasks. In particular, we illustrate that CPIC can recover trajectories of a chaotic dynamical system embedded in high-dimensional noisy observations with low signal-to-noise ratios in synthetic data. Furthermore, we conduct numerical experiments on four real-world datasets with different goals. In two neuroscience datasets, monkey primary motor cortex (M1) and rat dorsal hippocampus (HC), we show that, compared with state-of-the-art methods, the latent representations extracted by CPIC have better forecasting accuracy for the exogenous variables: the monkey's future hand position for M1, and the rat's future position for HC. In two other real datasets, historical hourly weather temperature data (TEMP) and motion sensor data (MS), we show that latent representations extracted by CPIC forecast the future of those time series more accurately than other methods. In summary, the primary contributions of our paper are as follows:
• We developed a novel information-theoretic self-supervised learning framework, Compressed Predictive Information Coding (CPIC), which extracts low-dimensional latent representation from time series. CPIC maximizes the predictive information in the latent space while minimizing the compression complexity.
• We introduced a stochastic encoder structure that encodes inputs into stochastic representations, handling uncertainty and contributing to better representations.
• Based on prior work, we derived variational bounds of the CPIC objective function and a tractable, end-to-end training procedure. Since our inference alleviates the Gaussian assumption common to other methods, it is applicable to a much larger class of dynamical systems. Moreover, to the best of our knowledge, our inference is the first to leverage the variational bounding technique for self-supervised learning on time series data.
• We demonstrated that, compared with other unsupervised methods, CPIC more robustly recovers latent dynamics in dynamical systems with low signal-to-noise ratios in synthetic experiments, and extracts more predictive features for downstream tasks in various real datasets.
2 RELATED WORK
Mutual information (MI) plays an important role in estimating the relationship between pairs of variables. It is a reparameterization-invariant measure of dependency:
$$I(X, Y) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x|y)}{p(x)}\right] \qquad (1)$$
It is used in computational neuroscience (Dimitrov et al., 2011), visual representation learning (Chen et al., 2020), natural language processing (Oord et al., 2018) and bioinformatics (Lachmann et al., 2016). In representation learning, the mutual information between inputs and representations is used to quantify the quality of the representation and is also closely related to reconstruction error in generative models (Kingma & Welling, 2013; Makhzani et al., 2015). Estimating mutual information is computationally and statistically challenging except in two cases: discrete data, as in Tishby et al. (2000) and Gaussian data, as in Chechik et al. (2005). However, these assumptions both severely constrain the class of learnable models (Alemi et al., 2016). Recent works leverage deep learning models to obtain both differentiable and scalable MI estimation (Belghazi et al., 2018; Nguyen et al., 2010; Oord et al., 2018; Alemi et al., 2016; Poole et al., 2019; Cheng et al., 2020).
In terms of representation learning in time series, Wiskott & Sejnowski (2002); Turner & Sahani (2007) targeted slowly varying features, and Creutzig & Sprekeler (2008) utilized the information bottleneck (IB) method (Tishby et al., 2000) to develop an information-theoretic objective function. Creutzig et al. (2009) proposed an alternative objective function based on a specific state-space model. Recently, Oord et al. (2018) proposed CPC to extract dynamic information based on an autoregressive model on representations and a contrastive loss on predictions. Clark et al. (2019); Bai et al. (2020) proposed unsupervised learning approaches to extract low-dimensional representations with maximal predictive information (PI). All of the above unsupervised representation learning models, except for CPC, assume the data to be Gaussian, which may not be realistic, especially when applied to neuroscience datasets (O'Doherty et al., 2017; Glaser et al., 2020), given the non-Gaussianity of neuronal activity. Here, we leverage recently introduced neural estimation of mutual information to construct upper bounds of the CPIC objective and develop an end-to-end training procedure. CPIC enables generalization beyond the Gaussian case and autoregressive models.
Recently, deep encoder networks have been leveraged to model nonlinear relations between latent representations and observed data in time series (Chen et al., 2020; Bai et al., 2020; He et al., 2020). However, the use of complicated nonlinear encoders hinders computational efficiency (Wang et al., 2019). CPIC proposes an efficient representation learning framework for time series that encodes data with maximal predictive information. We also note that several works address time series modeling from a generative perspective. Fabius & Van Amersfoort (2014) first combined recurrent neural networks with variational autoencoders to model time series data. Frigola et al. (2014) proposed a variational Gaussian-process state-space model. Meng et al. (2021) proposed a variational structured Gaussian-process regression network that can efficiently handle more complicated relationships in time series. Inference in most generative models depends on the length of the time series, while the inference of CPIC depends on the window size T, which is more scalable for long time series.
3 COMPRESSED PREDICTIVE INFORMATION CODING
The main intuition behind Compressed Predictive Information Coding (CPIC) is to extract low dimensional representations with minimal compression complexity and maximal dynamical structure. Specifically, CPIC first discards low-level information that is not relevant for dynamic prediction and noise that is more local by minimizing compression complexity (i.e., mutual information) between inputs and representations to improve model generalization. Second, CPIC maximizes the predictive information in the latent space of compressed representations.
Compared with Clark et al. (2019); Bai et al. (2020), CPIC first utilizes a stochastic encoder to handle uncertainty in the representations, which contributes to more robust representations, and also relieves the Gaussian assumption by constructing bounds of mutual information based on neural estimation. In more detail, instead of employing a deterministic linear mapping as the encoder to compress data as in Clark et al. (2019), CPIC takes advantage of a stochastic linear or nonlinear mapping. Given inputs, the stochastic representation follows a Gaussian distribution, with means and variances encoded by any neural network structure. A nonlinear CPIC utilizes a stochastic nonlinear encoder composed of a nonlinear mean encoder and a linear variance encoder, while a linear CPIC utilizes a stochastic linear encoder composed of a linear mean encoder and a linear variance encoder. Note that a stochastic representation conditioned on inputs is parameterized as a conditional Gaussian distribution, but the marginal distribution of the representation is a mixture of Gaussians, which is widely recognized as a universal approximator of densities.
On the other hand, avoiding the Gaussian assumption on mutual information (Creutzig & Sprekeler, 2008; Creutzig et al., 2009; Clark et al., 2019; Bai et al., 2020), CPIC leverages neural estimations of mutual information. Specifically, we propose differentiable and scalable bounds of the CPIC objective via variational inference, which enables end-to-end training.
Formally, let $X = \{x_t\}$, $x_t \in \mathbb{R}^N$ be a stationary, discrete time series, and let $X_{\text{past}} = (x_{-T+1}, \ldots, x_0)$ and $X_{\text{future}} = (x_1, \ldots, x_T)$ denote consecutive past and future windows of length T. Both past and future data are compressed into past and future representations, denoted $Y_{\text{past}} = (y_{-T+1}, \ldots, y_0)$ and $Y_{\text{future}} = (y_1, \ldots, y_T)$, with embedding dimension Q. Similar to the information bottleneck (IB) (Tishby et al., 2000), the CPIC objective trades off two factors: the first seeks to minimize the compression complexity, and the second to maximize the predictive information in the latent (representation) space. When the encoder is deterministic, the compression complexity term vanishes; when the encoder is stochastic, the complexity is measured by the mutual information between representations and inputs. In the CPIC objective, the trade-off weight β > 0 dictates the balance between the compression and predictive information terms:
$$\min_{\psi} \mathcal{L}, \quad \text{where } \mathcal{L} \equiv \beta\left(I(X_{\text{past}}; Y_{\text{past}}) + I(X_{\text{future}}; Y_{\text{future}})\right) - I(Y_{\text{past}}; Y_{\text{future}}) \qquad (2)$$
where ψ denotes the model parameters that encode inputs X into latent variables Y. A larger β promotes a more compact mapping and thus benefits model generalization, while a smaller β yields more predictive information in the latent space on the training data. This objective function is visualized in Figure 1, where inputs X are encoded into the latent space as Y via tractable encoders, and the dynamics of Y are learned in a model-free manner.
The encoder $p(Y|X)$ could be implemented by fitting deep neural networks (Alemi et al., 2016) to encode the data X. Instead, CPIC takes an approach similar to VAEs (Kingma & Welling, 2013): it encodes data into stochastic representations. In particular, CPIC employs a stochastic encoder ($g_{\text{enc}}$ in Figure 1) to compress input $x_t$ into $y_t$ as
$$y_t \mid x_t \sim \mathcal{N}\left(\mu_t, \mathrm{diag}(\sigma_t^2)\right) \qquad (3)$$
for each time stamp t. The mean of $y_t$ is given by $\mu_t = g^{\text{Encoder}}_{\mu}(x_t)$, whereas the variance arises from $\sigma_t = g^{\text{Encoder}}_{\sigma}(x_t)$.
The encoders $g^{\text{Encoder}}_{\mu}$ and $g^{\text{Encoder}}_{\sigma}$ can be any nonlinear mappings and are usually modeled with neural network architectures. We use a two-layer perceptron with the ReLU activation function (Agarap, 2018) as the nonlinear mapping. For a linear CPIC, we specify the mean of the representation as $\mu_t = u^T x_t$. In both the linear and nonlinear CPIC settings, if $\sigma_t = 0$, the stochastic encoder reduces to a deterministic encoder.
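A minimal PyTorch sketch of this stochastic encoder is shown below, assuming a two-layer perceptron mean encoder with a hidden width of our choosing, a linear log-variance encoder, and the standard reparameterization trick for sampling; these implementation details beyond the text are assumptions.

```python
import torch
import torch.nn as nn

class StochasticEncoder(nn.Module):
    def __init__(self, n_in, q, hidden=128, linear=False):
        super().__init__()
        if linear:   # linear CPIC: mu_t = u^T x_t
            self.mu = nn.Linear(n_in, q, bias=False)
        else:        # nonlinear CPIC: two-layer perceptron with ReLU
            self.mu = nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                                    nn.Linear(hidden, q))
        self.log_var = nn.Linear(n_in, q)   # linear variance encoder

    def forward(self, x):                    # x: (batch, n_in)
        mu = self.mu(x)
        sigma = torch.exp(0.5 * self.log_var(x))
        y = mu + sigma * torch.randn_like(mu)  # y_t | x_t ~ N(mu_t, diag(sigma_t^2))
        return y, mu, sigma                  # sigma = 0 recovers a deterministic encoder
```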
We extend the single input to multiple inputs in the CPIC framework via a specified window size T (the selection of the window size is discussed in Appendix A). Due to the stationarity assumption, the relations between the past/future blocks of input data $X(-T), X(T) \in \mathbb{R}^{N \times T}$ and the encoded data $Y(-T), Y(T) \in \mathbb{R}^{Q \times T}$ are equivalent: $p_{X(-T), Y(-T)} = p_{X(T), Y(T)}$. Here $-T$ and $T$ index the past and future T data points. Without loss of generality, the compression relation can be expressed as $Y(T) = g^{\text{Encoder}}_{\mu}(X(T)) + \xi(T)$, where $\xi(T) \sim \mathcal{N}(0, \mathrm{blockdiag}(\mathrm{diag}(\sigma_1^2), \ldots, \mathrm{diag}(\sigma_T^2)))$ and the noise standard deviation is $\sigma_t = g^{\text{Encoder}}_{\sigma}(x_t)$.
4 VARIATIONAL BOUNDS OF COMPRESSED PREDICTIVE INFORMATION CODING
In CPIC, since the data X are stationary, the mutual information between the input data and the compressed data for the past equals that for the future: $I(X(-T); Y(-T)) = I(X(T); Y(T))$. Therefore, the objective of CPIC can be rewritten as
$$\min \mathcal{L} = \beta I(X(T); Y(T)) - I(Y(-T); Y(T)) \qquad (4)$$
We develop variational upper bounds on the mutual information for the compression complexity $I(X(T); Y(T))$ and lower bounds on the mutual information for the predictive information $I(Y(-T); Y(T))$.
4.1 UPPER BOUNDS OF COMPRESSION COMPLEXITY
In this section, we derive a tractable variational upper bound (VUB) depending on a single sample and a leave-one-out upper bound (L1Out) (Poole et al., 2019) depending on multiple samples.
Theorem 1 By introducing a variational approximation $r(y(T))$ to the marginal distribution $p(y(T))$, a tractable variational upper bound of the mutual information $I(X(T); Y(T))$ is derived as $I_{\text{VUB}}(X(T); Y(T)) = \mathbb{E}_{X(T)}\left[\mathrm{KL}(p(y(T)|x(T)) \,\|\, r(y(T)))\right]$.
Theorem 2 By utilizing a Monte Carlo approximation for the variational distribution $r(y(T))$, the L1Out upper bound of the mutual information $I(X(T); Y(T))$ is derived as $I_{\text{L1Out}}(X(T); Y(T)) = \mathbb{E}\left[\frac{1}{S}\sum_{i=1}^{S} \log \frac{p(y(T)_i | x(T)_i)}{\frac{1}{S-1}\sum_{j \neq i} p(y(T)_i | x(T)_j)}\right]$, where S is the sample size.
The derivation details are given in Appendices B and C. In practice, the L1Out bound depends on the sample size S and may suffer from numerical instability, so S should be chosen as large as possible. In general scenarios where $p(y(T)|x(T))$ is intractable, Cheng et al. (2020) proposed variational versions of VUB and L1Out that approximate the conditional distribution with a neural network. Since the conditional distribution $p(y(T)|x(T))$ is parameterized as a known stochastic/deterministic encoder in CPIC, those variational versions are not needed here.
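Since the encoder gives a closed-form Gaussian conditional, the L1Out bound of Theorem 2 can be computed directly; a sketch follows, where the tensor shapes and the use of `torch.distributions` are our choices.

```python
import math
import torch
from torch.distributions import Normal

def l1out_upper_bound(mu, sigma, y):
    """mu, sigma, y: (S, Q). Returns the L1Out upper bound on I(X; Y) in nats."""
    # log p(y_i | x_j) for all pairs, summed over the Q independent dimensions
    logp = Normal(mu[None, :, :], sigma[None, :, :]).log_prob(y[:, None, :]).sum(-1)  # (S, S)
    S = logp.shape[0]
    diag = logp.diagonal()                                   # log p(y_i | x_i)
    off = logp.masked_fill(torch.eye(S, dtype=torch.bool), float('-inf'))
    leave_one_out = torch.logsumexp(off, dim=1) - math.log(S - 1)  # log of the l.o.o. mixture
    return (diag - leave_one_out).mean()
```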
4.2 LOWER BOUNDS OF PREDICTIVE INFORMATION
For the predictive information (PI), we derive lower bounds of $I(Y(-T); Y(T))$ using results in Agakov (2004); Alemi et al. (2016); Poole et al. (2019). In particular, we derive tractable unnormalized Barber and Agakov (TUBA) (Barber & Agakov, 2003) lower bounds depending on a single sample, and an InfoNCE lower bound (Oord et al., 2018) depending on multiple samples. All derivation details are given in Appendices D, E and F.
Theorem 3 A variational lower bound on the predictive information $I(Y(-T); Y(T))$ is $I_{\text{VLB}}(Y(-T); Y(T)) = H(Y(T)) + \mathbb{E}_{p(y(-T), y(T))}[\log q(y(T)|y(-T))]$, where $q(y(T)|y(-T))$ is a variational conditional distribution.
However, this lower bound requires a tractable decoder for the conditional distribution $q(y(T)|y(-T))$ (Alemi et al., 2016). Alternatively, we derive a TUBA lower bound (Barber & Agakov, 2003), which is free of any decoder parameterization.
Theorem 4 By introducing a differentiable critic function $f(x, y)$ and a baseline function $a(y(T))$ defined in Appendix E, the TUBA lower bound of the predictive information is derived as $I_{\text{TUBA}}(Y(-T), Y(T)) = \mathbb{E}_{p(y(-T), y(T))}[\tilde{f}(y(-T), y(T))] - \log\left(\mathbb{E}_{p(y(-T))p(y(T))}[e^{\tilde{f}(y(-T), y(T))}]\right)$, where $\tilde{f}(y(-T), y(T)) = f(y(-T), y(T)) - \log(a(y(T)))$.
Different forms of the baseline function lead to different neural estimators in the literature, such as MINE (Belghazi et al., 2018) and NWJ (Nguyen et al., 2010). On the other hand, all TUBA-based estimators have high variance due to the high variance of $f(x, y)$. Oord et al. (2018) proposed a low-variance MI estimator based on noise-contrastive estimation called InfoNCE. Moreover, other differentiable mutual information estimators exist, including SMILE (Song & Ermon, 2019) and the Echo noise estimator (Brekelmans et al., 2019).
Theorem 5 In the CPIC setting, the InfoNCE lower bound of predictive information is derived as
$$I_{\text{InfoNCE}}(Y(-T); Y(T)) = \mathbb{E}\left[\frac{1}{S}\sum_{i=1}^{S} \log \frac{e^{f(y(-T)_i, y(T)_i)}}{\frac{1}{S}\sum_{j=1}^{S} e^{f(y(-T)_i, y(T)_j)}}\right] \qquad (5)$$
The expectation is over S independent samples from the joint distribution $p(y(-T), y(T))$, which follows the Markov chain in Figure 1: $p(y(-T), y(T)) = \int p(x(-T), x(T))\, p(y(-T)|x(-T))\, p(y(T)|x(T))\, dx(-T)\, dx(T)$.
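The InfoNCE bound in equation 5 can be computed from a batch of paired windows with a separable critic; a sketch follows, where the critic architecture (a bilinear form through two linear maps) and its dimension are our assumptions.

```python
import math
import torch
import torch.nn as nn

class InfoNCE(nn.Module):
    """Separable critic f(u, v) = h(u)^T g(v) and the InfoNCE bound of equation 5."""
    def __init__(self, dim_past, dim_future, dim_critic=64):
        super().__init__()
        self.h = nn.Linear(dim_past, dim_critic)
        self.g = nn.Linear(dim_future, dim_critic)

    def forward(self, y_past, y_future):            # each of shape (S, dim)
        scores = self.h(y_past) @ self.g(y_future).T   # (S, S) critic matrix f_ij
        S = scores.shape[0]
        # mean_i [ f_ii - log( (1/S) sum_j e^{f_ij} ) ]
        return (scores.diagonal() - torch.logsumexp(scores, dim=1)).mean() + math.log(S)
```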
4.3 VARIATIONAL BOUNDS OF CPIC
We propose two classes of upper bounds on the CPIC objective, according to whether the bound depends on a single sample or multiple samples. Following the uni-sample and multi-sample bounds derived in Sections 4.1 and 4.2, the first class, named uni-sample upper bounds, takes the VUB upper bound on the compression complexity $I(X(T); Y(T))$ and the TUBA lower bound on the predictive information in equation 14. Thus we have
$$\mathcal{L}_{\text{UNI}} = \beta\, \mathrm{KL}(p(y(T)|x(T)) \,\|\, r(y(T))) - I_{\text{TUBA}}(Y(-T), Y(T)) \qquad (6)$$
Notice that by choosing different baseline functions, the TUBA lower bound becomes equivalent to different mutual information estimators such as MINE and NWJ. The second class, named the multi-sample upper bound, takes advantage of the noise-contrastive estimation approach and is expressed as
$$\mathcal{L}_{\text{MUL}} = \beta\, I_{\text{L1Out}}(X(T); Y(T)) - I_{\text{InfoNCE}}(Y(-T); Y(T)) \qquad (7)$$
Two main differences exist between these classes of upper bounds. First, the performance of the multi-sample upper bound depends on the batch size while the uni-sample upper bounds do not, so when computational budgets do not allow large batch sizes in training, uni-sample upper bounds may be preferred. Second, the multi-sample upper bound has lower variance than the uni-sample upper bounds. Thus, they have different strengths and weaknesses depending on the context. We evaluate the performance of these variational bounds of CPIC in terms of reconstruction performance in synthetic experiments in Appendix G, and find that with sufficiently large batch sizes, the multi-sample upper bound outperforms most of the uni-sample upper bounds. Thus, without further specification, we choose the multi-sample upper bound as the variational bound of the CPIC objective in this work. Furthermore, we classify the upper bounds into stochastic and deterministic versions according to whether we employ a stochastic or deterministic encoder. Notice that when choosing the deterministic encoder, the compression complexity term (the first term) in equations 6 and 7 is constant.
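Putting the pieces together, the multi-sample upper bound of equation 7 for a stochastic encoder reduces to a few lines, reusing the `StochasticEncoder`, `l1out_upper_bound`, and `InfoNCE` sketches above; treating each window as a single flattened vector here is our simplification.

```python
def cpic_multisample_loss(encoder, critic, x_past, x_future, beta=1.0):
    """L_MUL = beta * I_L1Out(X(T); Y(T)) - I_InfoNCE(Y(-T); Y(T))."""
    y_past, _, _ = encoder(x_past)
    y_future, mu_f, sigma_f = encoder(x_future)
    compression = l1out_upper_bound(mu_f, sigma_f, y_future)  # upper-bounds I(X(T); Y(T))
    predictive = critic(y_past, y_future)                     # lower-bounds I(Y(-T); Y(T))
    return beta * compression - predictive
```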
5 NUMERICAL EXPERIMENTS
In this section, we demonstrate the superior performance of CPIC in both synthetic and real data experiments. We first examine the reconstruction performance of CPIC on noisy observations of a dynamical system (the Lorenz attractor). The results show that CPIC better recovers the latent trajectories from noisy high-dimensional observations. Moreover, we demonstrate that maximizing the predictive information (PI) in the compressed latent space is more effective than maximizing PI between the latent and observation spaces as in Creutzig & Sprekeler (2008); Creutzig et al. (2009), and we demonstrate the benefits of the stochastic representation over the deterministic one. Second, we demonstrate the better predictive performance of the representation as evaluated by linear forecasting. The motivation for using linear forecasting models is that good representations disentangle complex data in a linearly accessible way (Clark et al., 2019). Specifically, we extract latent representations and then conduct forecasting tasks given the inferred representations
on two neuroscience datasets and two other real datasets. The two neuroscience datasets are multi-neuronal recordings from the hippocampus (HC) while rats navigate a maze (Glaser et al., 2020), and multi-neuronal recordings from primary motor cortex (M1) during a reaching task in monkeys (O'Doherty et al., 2017). The two other real datasets are multi-city temperature data (TEMP) from 30 cities over several years (Gene, 2017) and 12 variables from accelerometer, gyroscope, and gravity motion sensors (MS) recording human kinematics (Malekzadeh et al., 2018). The forecasting task for the neuroscience datasets is to predict the future of the relevant exogenous variables from past neural data, while the forecasting task for the other datasets is to predict the future of those time series from their past. The results show that CPIC has better predictive performance on these forecasting tasks than existing methods.
5.1 SYNTHETIC EXPERIMENT WITH NOISY LORENZ ATTRACTOR
The Lorenz attractor is a 3D time series realized from the Lorenz dynamical system (Pchelintsev, 2014). It describes a three-dimensional flow generated as:
$$\frac{dx}{dt} = \sigma(y - x), \quad \frac{dy}{dt} = x(\rho - z) - y, \quad \frac{dz}{dt} = xy - \gamma z \qquad (8)$$
Lorenz set the values σ = 10, ρ = 28 and γ = 8/3 to exhibit chaotic behavior, as done in recent works (She & Wu, 2020; Clark et al., 2019; Zhao & Park, 2017; Linderman et al., 2017). We simulated trajectories from the Lorenz dynamical system, shown in the top-left panel of Figure 2. We then mapped the 3D latent signals to 30D lifted observations with a random linear embedding (middle-left panel) and added spatially anisotropic Gaussian noise to the 30D lifted observations (bottom-left panel). The noise is generated according to different signal-to-noise ratios (SNRs), where the SNR is defined as the ratio of the variances of the first principal components of the dynamics and of the noise, as in Clark et al. (2019). Specifically, we used 10 SNR levels spaced evenly on a log (base 10) scale between [-3, -1] and corrupted the 30D lifted observations with noise at each SNR level. Details of the simulation are available in Appendix G. Finally,
we deploy different variants of CPIC to recover the true 3D dynamics from the corrupted 30D lifted observations at different SNR levels, and compare the accuracy of recovering the underlying Lorenz attractor time series.
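A sketch of this simulation pipeline is shown below; the integration settings and the crude per-channel noise scaling are our simplifications, since the paper defines the SNR via first principal components.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, gamma=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - gamma * z]

sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], t_eval=np.linspace(0, 100, 10000))
latent = sol.y.T                        # (10000, 3) ground-truth dynamics
V = np.random.randn(3, 30)              # random linear embedding to 30 dimensions
lifted = latent @ V                     # (10000, 30) noiseless lifted observations
snr = 0.01                              # one of the 10 levels in [1e-3, 1e-1]
noise_scale = np.sqrt(lifted.var(axis=0) / snr)   # per-channel scaling (assumption)
observed = lifted + noise_scale * np.random.randn(*lifted.shape)
```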
We aligned the inferred latent trajectory with the true 3D dynamics via an optimal linear mapping, since the latent trajectories are only identified up to reparameterization. We validated the reconstruction performance with the R2 regression score of the extracted vs. true trajectories. We first compared the reconstruction performance of the different variational bounds of CPIC with latent dimension Q = 3 and time window size T = 4, and found that the multi-sample upper bound outperforms the uni-sample upper bounds at almost all 10 SNR levels. Thus, we recommend the multi-sample upper bound for CPIC in practice and use it for further results. We also find that, compared to DCA (Clark et al., 2019) and CPC (Oord et al., 2018), CPIC is more robust to noise and thus better extracts the true latent trajectory from the noisy high-dimensional observations. The detailed results are reported in Appendix H.
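Alignment and scoring then amount to a least-squares fit from the inferred trajectory to the ground truth; a sketch follows, where `y_hat` denotes a hypothetical inferred latent trajectory and `latent` the true dynamics from the simulation sketch above.

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def alignment_r2(y_hat, latent):
    """Align the inferred trajectory to the true 3D dynamics with an optimal linear map."""
    reg = LinearRegression().fit(y_hat, latent)
    return r2_score(latent, reg.predict(y_hat))
```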
In order to demonstrate the benefits of introducing stochasticity in the encoder and of maximizing the predictive information in latent space, we considered four variants of CPIC: with a stochastic or deterministic encoder, and with predictive information in latent space or between latent and observation space. All four variants use latent dimension Q = 3 and time window size T = 4. For each model and each SNR level, we ran 100 replicates with random initializations. Figure 2 shows the aligned latent trajectories inferred from corrupted lifted observations for high, intermediate and low SNR (0.001, 0.01, 0.1) levels of noise, with the median R2 scores across the 100 replicates. The point-wise distances between the recovered dynamics and the ground-truth dynamics are encoded in colors from blue to red, corresponding to short to long distances. For high SNR (SNR = 0.1, top-right), all models recover the Lorenz dynamics well, though the stochastic CPIC with predictive information in latent space attains a larger R2 than the others. For intermediate SNR (SNR = 0.008, middle-right), the stochastic CPICs perform much better than the deterministic CPICs. Finally, as the SNR gets lower (SNR = 0.001, bottom-right), all methods perform poorly, but we note that, numerically, considering predictive information in latent space is much better than considering it between latent and observation space.
To more thoroughly characterize the benefits of stochastic encoding and PI in the latent space, we examined the mean R2 scores of the four variants at each SNR level across N = 10 and N = 100 replicates in the top row of Figure 3. It shows that CPIC with stochastic representations and PI in latent space robustly outperforms the other variants on average. We also report the best R2 scores for the four variants, in the sense that we report the R2 score of the model with the smallest training loss across the N runs. The bottom row of Figure 3 shows that CPIC with stochastic representation and PI in latent space achieves better reconstruction and robustness to noise than the other variants, especially when the number of runs N is small. Even when N is large, stochastic CPIC with PI in latent space greatly outperforms the others when the noise level is high. We note that in the case of high-dimensional noisy observations with large numbers of samples, common in many modern real-world time series datasets, CPIC's robustness to noise and its capacity to achieve good results within a small number of runs are clear advantages. Moreover, we display a quantile analysis of the R2 scores in Appendix I, with consistent results.
5.2 REAL EXPERIMENTS WITH DIVERSE FORECASTING TASKS
In this section, we show that latent representations extracted by stochastic CPIC perform better in downstream forecasting tasks on four real datasets. We compared stochastic CPIC with contrastive predictive coding (CPC) (Oord et al., 2018), PCA, SFA (Wiskott & Sejnowski, 2002), DCA (Clark et al., 2019), and deterministic CPIC. For CPC, we use a linear encoder for a fair comparison. In addition, we compare the results from CPCs and CPICs with nonlinear encoders, in which the linear mean encoder is replaced by a multi-layer perceptron. For each model, we extract the latent representations (conditional means) and predict the relevant exogenous variable at a future time step for the neural datasets. For example, for the M1 dataset, we extract a consecutive 3-length window representation of multi-neuronal spiking activity to predict the monkey's arm position at a future time step that is lag time stamps away. The details of the experiments are available in Appendix J. Neuroscientists often want to interpret latent representations of data to gain insight into the processes that generate the observed data. Thus, we used linear regression¹ to predict the exogenous variables, with the intuition that a simple (i.e., linear) prediction model will only be sensitive to the structure in the data that is easiest to interpret, as in Yu et al. (2008); Pandarinath et al. (2018); Clark et al. (2019). Furthermore, the neuroscience datasets (M1 and HC) present extremely challenging settings for predicting the exogenous variables, due to severe experimental undersampling of neurons caused by technical limitations, as well as sizeable noise magnitudes. For these tasks, the R2 regression score is used as the evaluation metric for forecasting performance. The four datasets are split into train and test data at a ratio of 4:1, and the forecasting task considers three different lag values (5, 10, and 15). For DCA and deterministic/stochastic CPIC, we took three different window sizes T = 1, 2, 3 and report the best R2 scores. Table 1 reports all R2 scores and demonstrates that our stochastic CPIC outperforms all other models except for the TEMP data with forecasting at lag 15.
6 CONCLUDING REMARKS
We developed a novel information-theoretic framework, Compressed Predictive Information Coding, to extract representations from sequential data. CPIC balances maximization of the predictive information in latent space against minimization of the compression complexity of the latent representation. We leveraged stochastic representations via a stochastic encoder and developed variational bounds of the CPIC objective function. We demonstrated that CPIC extracts more accurate low-dimensional latent dynamics and more useful representations, with better forecasting performance in diverse downstream tasks on four real-world datasets. Together, these results indicate that CPIC should yield similar improvements in other real-world scenarios. Moreover, we note that on most real datasets, nonlinear CPIC leads to better representations, in terms of prediction performance, than linear CPIC.
¹https://scikit-learn.org/stable/modules/linear_model.html
A SELECTION OF WINDOW SIZE
Selecting the optimal window size T is important for the downstream use of the dynamics; a poor choice of T may cause aliasing artifacts. In general, T should be selected by cross validation. Furthermore, plots of the predictive information as a function of both the window size T and the embedding dimension Q can serve as diagnostic tools.
B DERIVATION OF $I_{\text{VUB}}$
Directly estimating the compression complexity is intractable, because $I(X(T); Y(T)) := \mathbb{E}_{X(T)}\left[\mathrm{KL}(p(y(T)|x(T)) \,\|\, p(y(T)))\right]$, in which the population distribution $p(y(T))$ is unknown. Thus we introduce a variational approximation to the marginal distribution of the encoded inputs $p(y(T))$, denoted $r(y(T))$. Due to the non-negativity of the Kullback-Leibler (KL) divergence, the variational upper bound (VUB) is derived as
I(X(T );Y (T )) = EX(T ) [ KL(p(y(T )|x(T )), r(y(T ))) ] − KL(p(y(T )), r(y(T )))
≤ EX(T ) [ KL(p(y(T )|x(T )), r(y(T ))) ] = IVUB(X(T );Y (T )) . (9)
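Because the encoder of equation 3 is conditionally Gaussian, the VUB term has a closed form once $r(y(T))$ is fixed. The sketch below assumes the simple choice $r = \mathcal{N}(0, I)$ (the high-bias option attributed to Alemi et al. (2016) in Appendix C); the function name and batching convention are ours.

```python
import torch

def vub_term(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over a batch.

    mu, sigma: (batch, T*Q) mean/std encoder outputs for a window of inputs.
    Assumes the variational marginal r(y(T)) is fixed to a standard normal.
    """
    kl_per_dim = 0.5 * (sigma.pow(2) + mu.pow(2) - 1.0 - 2.0 * sigma.log())
    return kl_per_dim.sum(dim=-1).mean()
```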
C DERIVATION OF I_L1Out
Generally, learning $r(y(T))$ is recognised as a density estimation problem (Silverman, 2018), which is challenging. In this setting, the variational distribution $r(y(T))$ is assumed to be learnable, and thus estimating the variational upper bound is tractable. In particular, Alemi et al. (2016) fixed $r(y(T))$ as a standard normal distribution, leading to high bias in MI estimation. Recently, Poole et al. (2019) utilized a Monte Carlo approximation for the variational distribution. In our case, with $S$ sample pairs $\{(x(T)_i, y(T)_i)\}_{i=1}^{S}$, $r_i(y(T)) = \frac{1}{S-1}\sum_{j\neq i} p(y(T)\mid x(T)_j) \approx p(y(T))$, and the L1Out bound is derived as below:
$$I_{\mathrm{L1Out}}(X(T);Y(T)) = \mathbb{E}\left[\frac{1}{S}\sum_{i=1}^{S}\log\frac{p(y(T)_i\mid x(T)_i)}{\frac{1}{S-1}\sum_{j\neq i} p(y(T)_i\mid x(T)_j)}\right]. \tag{10}$$
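For the Gaussian encoder used in CPIC, equation 10 can be evaluated directly from the $(S, S)$ matrix of conditional log-densities. The sketch below is illustrative; the function name and tensor layout are our own, not taken from a released codebase.

```python
import torch

def l1out_upper_bound(mu, sigma, y):
    """Leave-one-out upper bound of I(X(T); Y(T)) for a Gaussian encoder.

    mu, sigma: (S, D) per-sample encoder outputs, D = T*Q flattened window.
    y:         (S, D) latent samples with y_i ~ p(y | x_i).
    """
    S = y.shape[0]
    dist = torch.distributions.Normal(mu.unsqueeze(0), sigma.unsqueeze(0))
    # log_p[i, j] = log p(y_i | x_j)
    log_p = dist.log_prob(y.unsqueeze(1)).sum(-1)
    joint = log_p.diagonal()
    # Leave-one-out marginal: average the S-1 off-diagonal conditionals per row.
    off_diag = log_p.masked_fill(torch.eye(S, dtype=torch.bool), float('-inf'))
    loo_marginal = torch.logsumexp(off_diag, dim=1) - torch.log(torch.tensor(S - 1.0))
    return (joint - loo_marginal).mean()
```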
D DERIVATION OF I_VLB
Similar to Agakov (2004), we replace the intractable conditional distribution p(y(T )|y(−T )) with a tractable optimization problem over a variational conditional distribution q(y(T )|y(−T )). It yields a lower bound on PI due to the non-negativity of the KL divergence:
$$I(Y(-T);Y(T)) \ge H(Y(T)) + \mathbb{E}_{p(y(-T),\,y(T))}\left[\log q(y(T)\mid y(-T))\right] \tag{11}$$
where $H(Y)$ is the differential entropy of the variable $Y$. This bound is tight if and only if $q(y(T)\mid y(-T)) = p(y(T)\mid y(-T))$, in which case the second term in equation 11 equals the negative conditional entropy $-H(Y(T)\mid Y(-T))$. However, this variational lower bound requires a tractable decoder for the conditional $q(y(T)\mid y(-T))$; alternatively, an energy-based variational family can be used for the conditional distribution, as in Appendix E.
The conditional expectation in equation 11 can be estimated using Monte Carlo sampling based on the encoded data distribution $p(y(-T), y(T))$, where encoded data are sampled by introducing the augmented data $x(-T)$ and $x(T)$ and marginalizing them out as
$$p(y(-T), y(T)) = \int p(x(-T), x(T))\, p(y(-T)\mid x(-T))\, p(y(T)\mid x(T))\, dx(-T)\, dx(T) \tag{12}$$
according to the Markov chain proposed in Figure 1.
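A minimal sketch of this sampling step is below, using the reparameterization trick so that downstream bounds stay differentiable in the encoder parameters; `encoder` is a hypothetical stand-in for the stochastic mean/variance encoder of equation 3, not a function from any released codebase.

```python
import torch

def sample_latent_pairs(encoder, x_past, x_future):
    """Draw (y(-T), y(T)) pairs per equation 12 with the reparameterization trick.

    encoder: hypothetical module mapping a window of inputs to (mu, sigma).
    x_past, x_future: (batch, T, N) windows drawn from p(x(-T), x(T)).
    """
    mu_p, sigma_p = encoder(x_past)
    mu_f, sigma_f = encoder(x_future)
    # y = mu + sigma * eps, eps ~ N(0, I), keeps gradients w.r.t. the encoder.
    y_past = mu_p + sigma_p * torch.randn_like(sigma_p)
    y_future = mu_f + sigma_f * torch.randn_like(sigma_f)
    return y_past, y_future
```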
E DERIVATION OF I_TUBA
According to Poole et al. (2019), by considering an energy-based variational family to express the conditional distribution $q(y(T)\mid y(-T))$:
$$q(y(T)\mid y(-T)) = \frac{p(y(T))\, e^{f(y(T),\, y(-T))}}{Z(y(-T))}, \tag{13}$$
where $f(\cdot, \cdot)$ is a differentiable critic function and $Z(y(-T)) = \mathbb{E}_{p(y(T))}\left[e^{f(y(T),\, y(-T))}\right]$ is a partition function, and introducing a baseline function $a(y(T))$, we derive a tractable TUBA lower bound (Barber & Agakov, 2003) of the predictive information as:
$$I(Y(-T);Y(T)) \ge \mathbb{E}_{p(y(-T),\,y(T))}\left[\tilde{f}(y(-T), y(T))\right] - \log\left(\mathbb{E}_{p(y(-T))p(y(T))}\left[e^{\tilde{f}(y(-T),\,y(T))}\right]\right) = I_{\mathrm{TUBA}}(Y(-T), Y(T)) \tag{14}$$
where $\tilde{f}(y(-T), y(T)) = f(y(-T), y(T)) - \log a(y(T))$ is treated as an updated critic function. Notice that different choices of the baseline function lead to different mutual information estimators.
When $a(y(T)) = 1$, it leads to the mutual information neural estimator (MINE) (Belghazi et al., 2018); when $a(y(T)) = Z(y(T))$, it leads to the lower bound proposed in Donsker & Varadhan (1975) (DV); and when $a(y(T)) = e$, it recovers the lower bound in Nguyen et al. (2010) (NWJ), also known as f-GAN (Nowozin et al., 2016) and MINE-f (Belghazi et al., 2018). In general, the critic function $f(x, y)$ and the log baseline function $a(y)$ are parameterized by neural networks (Oord et al., 2018; Belghazi et al., 2018): Oord et al. (2018) used a separable critic function $f(x, y) = h_\theta(x)^\top g_\theta(y)$, while Belghazi et al. (2018) used a joint critic function $f(x, y) = f_\theta(x, y)$; Poole et al. (2019) observed that a joint critic generally performs better than a separable critic but scales poorly with batch size.
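The sketch below evaluates the bound in equation 14 from a batch of paired latent windows, with the baseline supplied in log-space so that setting it to 0 matches the MINE-style choice above and setting it to 1 the NWJ-style choice. Names and tensor layout are illustrative assumptions.

```python
import torch

def tuba_lower_bound(scores, log_a):
    """TUBA bound (equation 14) from an (S, S) critic matrix.

    scores[i, j] = f(y(-T)_i, y(T)_j); diagonal entries are paired samples
    from p(y(-T), y(T)), off-diagonal entries approximate the product of
    marginals. log_a: (S,) values of log a(y(T)_j).
    """
    f_tilde = scores - log_a.unsqueeze(0)   # subtract log a(y(T)) per column
    joint = f_tilde.diagonal().mean()
    S = scores.shape[0]
    off_diag = f_tilde[~torch.eye(S, dtype=torch.bool)]
    return joint - torch.log(off_diag.exp().mean())
```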
F DERIVATION OF I_InfoNCE
The derivation of InfoNCE in our CPIC setting is straightforward: treat $Y(-T)$ and $Y(T)$ as the input and output in the InfoNCE formula from the CPC setting (Oord et al., 2018).
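Concretely, with an $(S, S)$ critic matrix over a batch of paired windows, the multi-sample bound of equation 5 reduces to $\log S$ minus a cross-entropy whose targets are the matching (diagonal) pairs. The sketch below is illustrative; `critic` is a placeholder for any separable or joint critic.

```python
import torch
import torch.nn.functional as F

def infonce_lower_bound(y_past, y_future, critic):
    """InfoNCE bound of I(Y(-T); Y(T)) (equation 5) from a batch of S pairs.

    critic(a, b) returns the (S, S) score matrix with entries f(y(-T)_i, y(T)_j).
    """
    scores = critic(y_past, y_future)      # (S, S)
    S = scores.shape[0]
    labels = torch.arange(S)
    # log S - mean_i [-log softmax(scores)[i, i]] equals equation 5.
    return torch.log(torch.tensor(float(S))) - F.cross_entropy(scores, labels)
```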
G DETAILS OF SIMULATION
In this section, we first generated the 3D latent signals according to the Lorenz dynamical system (equation 8), denoted $X \in \mathbb{R}^{3\times T}$. We calculated the largest eigenvalue of the covariance matrix of $X$ as the dynamics variance, denoted $\sigma^2_{\mathrm{dynamics}}$, and set the noise variance to $\sigma^2_{\mathrm{noise}} = \sigma^2_{\mathrm{dynamics}}/\mathrm{SNR}$, where SNR is the signal-to-noise ratio. We then randomly generated a semi-orthogonal matrix $V \in \mathbb{R}^{30\times 3}$ and generated the true 30D signal $VX$, embedded with additive spatially structured white noise, where the noise subspace $V_{\mathrm{noise}}$ is generated with median principal angles with respect to the dynamics subspace $V$. The noise covariance $\Sigma_{\mathrm{noise}}$ is generated with largest eigenvalue $\sigma^2_{\mathrm{noise}}$, and we generate the noisy signal at the $n$th dimension by $[Y_{\mathrm{noisy}}]_n \sim \mathcal{N}(v_n^\top X, \Sigma_{\mathrm{noise}})$, $n = 1, \ldots, 30$.
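A sketch of this generation pipeline is below, assuming the standard chaotic Lorenz parameters; for brevity the additive noise here is isotropic rather than spatially structured at median principal angles, so it reproduces only the SNR scaling and the semi-orthogonal lift described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_noisy_lorenz(n_steps=5000, dt=0.01, snr=0.01, seed=0):
    """Simulate Lorenz dynamics, lift to 30D, and corrupt at a target SNR."""
    rng = np.random.default_rng(seed)
    def lorenz(t, s):
        x, y, z = s
        # Standard chaotic parameters: sigma=10, rho=28, beta=8/3.
        return [10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z]
    ts = np.arange(n_steps) * dt
    X = solve_ivp(lorenz, (0, ts[-1]), [1.0, 1.0, 1.0], t_eval=ts).y  # (3, T)
    # Random semi-orthogonal lift V in R^{30x3} via reduced QR.
    V, _ = np.linalg.qr(rng.standard_normal((30, 3)))
    sigma2_dyn = np.linalg.eigvalsh(np.cov(X)).max()
    sigma_noise = np.sqrt(sigma2_dyn / snr)
    Y = V @ X + sigma_noise * rng.standard_normal((30, n_steps))
    return X.T, Y.T  # latent (T, 3), noisy observations (T, 30)
```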
H MODEL COMPARISON IN TERMS OF R2 REGRESSION SCORE IN THE NOISY LORENZ ATTRACTOR EXPERIMENT
In this section, the $R^2$ regression scores for CPC, DCA, and deterministic & stochastic CPICs (three uni-sample upper bounds in terms of NWJ, MINE, and TUBA, and one multi-sample upper bound) for all ten different SNRs are reported in Table 2. It shows that stochastic CPIC with the multi-sample upper bound outperforms the other approaches for the majority of SNRs. It also shows that CPIC is the most robust to noisy data and thus recovers the best latent trajectories from noisy observations, compared with CPC and DCA.
We also show the aligned latent trajectories inferred from the corrupted lifted observations for high, intermediate, and low SNR (0.1, 0.01, 0.001) levels of noise, with the median $R^2$ scores across 100 replicates, for PCA and DCA (as the extension of Figure 2) in Figure 4. The point-wise distances between the recovered dynamics and the ground-truth dynamics are encoded in the colors from blue to red, corresponding to short to long distances. It shows that stochastic CPIC outperforms both PCA and DCA.
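Since latent trajectories are only identified up to a linear transformation, the alignment used for these comparisons can be sketched as a least-squares fit from the inferred latents to the ground truth before scoring; the helper below is illustrative rather than the paper's exact implementation.

```python
import numpy as np

def aligned_r2(latents, truth):
    """Align inferred latents to the ground-truth trajectory with the optimal
    linear map (least squares, with intercept) and report the R^2 score."""
    A = np.hstack([latents, np.ones((len(latents), 1))])  # add intercept
    W, *_ = np.linalg.lstsq(A, truth, rcond=None)
    pred = A @ W
    ss_res = ((truth - pred) ** 2).sum()
    ss_tot = ((truth - truth.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```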
I COMPARISON ON R2 SCORES OF LATENT DYNAMICS REGRESSION FOR NOISY LORENZ ATTRACTOR IN TERMS OF QUANTILE ANALYSIS
We display the median performance (with the inter-quartile range as the error bars) of the $R^2$ scores of latent dynamics regression for the noisy Lorenz attractor in Figure 5.
J DETAILS OF REAL-WORLD EXPERIMENTS
The four real datasets are the monkey motor cortical dataset (M1), the rat hippocampal dataset (HC), the temperature dataset (Temp), and the accelerometer dataset (MS).
J.1 MONKEY MOTOR CORTICAL DATASET
O’Doherty et al. (2017) released multi-electrode spiking data from both M1 and S1 for two monkeys during a continuous grid-based reaching task. We used M1 data from the subject “Indy” (specifically, the file “indy_20160627_01.mat”). We discarded single units with fewer than 5,000 spikes, leaving 109 units. We binned the spikes into non-overlapping bins, square-root transformed the data, and mean-centered the data using a sliding window 30 s in width.
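A sketch of this preprocessing for a single unit is below; the bin width, recording length, and sliding-window length are left as parameters since the exact values differ across datasets, and the running-mean implementation is our own simplification.

```python
import numpy as np

def preprocess_spikes(spike_times, bin_width, t_max, center_win=None):
    """Bin spike times into non-overlapping bins, square-root transform,
    and optionally mean-center with a sliding window of `center_win` bins."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    rates = np.sqrt(counts.astype(float))
    if center_win is not None:
        # Running mean via convolution; edges are only approximately centered.
        kernel = np.ones(center_win) / center_win
        rates = rates - np.convolve(rates, kernel, mode='same')
    return rates
```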
J.2 RAT HIPPOCAMPAL DATA
Glaser et al. (2020) released the original data. The data consist of 93 minutes of extracellular recordings from layer CA1 of dorsal hippocampus while a rat chased rewards on a square platform. We discarded single units with fewer than 10 spikes, leaving 55 units. We binned the spikes into non-overlapping 50 ms bins, then square-root transformed the data.
J.3 TEMPERATURE DATASET
The temperature dataset consists of hourly temperature data for 30 U.S. cities over a period of 7 years from OpenWeatherMap.org. We downsampled the data by a factor of 24 to obtain daily temperatures.
J.4 ACCELEROMETER DATASET
Malekzadeh et al. (2018) released accelerometer data recording roll, pitch, yaw; gravity x, y, z; rotation x, y, z; and acceleration x, y, z, for a total of 12 kinematic variables. The sampling rate is 50 Hz. We used the file “sub_19.csv” from “A_DeviceMotion_data.zip”.
J.5 FORECASTING TASK
The forecasting task is the same as in Clark et al. (2019). We use the extracted consecutive 3-length window representations of the endogenous data to forecast the relevant future exogenous variables at lag n. In M1 and HC, the endogenous variables are the processed spiking data, and the exogenous variables are the location data. In Temp and MS, we assume the endogenous and exogenous variables are the same: the 30 U.S. cities’ hourly temperatures for the Temp data and the 12 kinematic variables for the MS data. | 1. What is the main contribution of the paper regarding learning representations of dynamic systems?
2. What are the strengths and weaknesses of the proposed method, particularly in its connection to established literature and potential applications?
3. Do you have any concerns or suggestions regarding the experimental results and their interpretation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the paper's comparisons with other works, specifically those using differentiable mutual information estimators? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a learning criterion and numerous bounds for learning representations of dynamic systems that are at once maximally compressed (minimal mutual information with their original representation, i.e., minimal rate) while being maximally informative about a future time point (or, due to symmetry, a previous time point). This leads to learned encodings which are, up to the SNR and trade-off parameter β, reflective of the dynamical system.
Strengths And Weaknesses
Strengths:
The method is grounded in established literature (information bottleneck and variational information bottleneck, and more generally rate-distortion theory), yet contributes a novel criterion. A second strength is the recognition that the stationarity assumption reduces the learning criterion to a two-term loss function, which is, in structure, very similar to many other information trade-offs in the literature.
Multiple bounds are explored for the mutual information minimization/maximization.
Experimental results are generally well done, modulo exact generation details.
Weaknesses:
Allowing time-windows T, for periodic or pseudoperiodic phenomena with period T' we might observe aliasing artifacts. While this remains stationary in a global sense, will such artifacts impede the dynamics? Moreover, will the interpretation or downstream use of those dynamics be impeded by this induced false beat frequency at (T-T')/2?
The variational lower bound (VLB) of Theorem 3 does not appear to require a decoder, contrary to the comment in the sentence immediately following the statement of the theorem; it seems as though it instead requires a good estimate of the conditional likelihood (a prediction from either T to T' or vice versa).
Clarity, Quality, Novelty And Reproducibility
It might be clearer to note the differing time-blocks as T_0 and T_1, since there is no reliance on any symmetry pattern (i.e., there is nothing special about -T versus T as far as I understand).
It might also be helpful to be clearer about the lifting process/simulation for the attractor datasets. While the general idea is conveyed, the exact details could, it seems, easily be shared (or code given to reproduce the test cases). Overall, however, this paper seems reproducible.
Overall the bounds are restatements of other results; this is somewhat clear in the paper, but it should still be noted re: novelty for the purposes of review. There are several other differentiable mutual information estimators, including some that avoid the Gaussian encoder function. Though the paper is already quite thorough, it may be helpful to also test these functions, e.g., the correction of MINE in Song et al. (2019) (called SMILE), and the Echo noise encoders in Brekelmans et al. (2018).
1. What is the main contribution of the paper regarding time-series prediction?
2. What are the strengths and weaknesses of the proposed approach, particularly in its originality and empirical results?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's methodology, experiments, and comparisons with other works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a method for self-supervised time-series prediction called Compressed Predictive Information Coding (CPIC). The core idea is that a sequence of data points are mapped to a Gaussian latent space via some encoder function. Then, these latent representations can be used for downstream tasks, e.g. regression. The authors propose minimizing the difference between two computationally intractable mutual information terms to train the model. To make their approach practically applicable, the authors propose to minimize a variational upper bound to the original objective. The authors test their approach on synthetic and some real-world regression problems.
Strengths And Weaknesses
Strengths
The proposed approach is well-motivated and an interesting idea. It is quite simple; hence, I wonder if it has already been proposed elsewhere. However, I am not familiar enough with the sequential prediction literature to be able to comment on the originality of the approach though.
Some of the empirical results also seem promising.
Weaknesses
In addition to the list below, the paper suffers from severe stylistic and clarity issues; see the section below.
In Sections 4.1 & 4.2, the authors state five "theorems" that give certain variational bounds on the intractable terms in their proposed loss function in Eq 4. However, it is not entirely clear what the authors are claiming here as their original contribution. The statements in Thms 1 and 3 follow trivially using elementary information-theoretic arguments in two lines (Eqs 9 and 11 in the appendix). The statements of Thms 2, 4, and 5 are essentially taken from other works with trivial substitutions. Could the authors please comment on what exactly they are claiming as their contribution here?
The authors never formally state the exact models used for their experiments, making their results impossible to interpret.
Clarity, Quality, Novelty And Reproducibility
The writing generally needs improvement. For example, there are countless typos, imprecise technical language and strange grammatical structures. However, there are also more major stylistic issues:
Much of the Introduction is dedicated to discussing related works in a way that doesn't directly pertain to the authors' proposed method. These parts should be moved into the related works section. The clearest example is the second paragraph, which should be (perhaps even verbatim) moved to Section 2.
Similarly, most of the first paragraph of Section 3 should also be moved to the Related Works section.
The authors' work seems to be very closely related to variational recurrent auto-encoders and variational state-space models, yet they are not discussed in the related works. Could the authors please comment on this?
In Section 3, the authors write: "A nonlinear CPIC refers to a stochastic nonlinear encoder including a nonlinear mean encoder and a linear variance encoder, while a linear CPIC refers to a stochastic linear encoder in which it replaces the linear mean encoder by a nonlinear mean encoder." - I highlight this particular sentence because it is supposed to describe the variants of CPIC that are later presented, but is very challenging to parse and probably contains errors.
In Thm 4, the definitions of "critic function" and "baseline function" are missing.
The authors only present results for the multi-sample upper bounds in the main text. Hence, I don't think presenting the univariate bounds is useful, and their discussion should be moved to the appendix.
Probably every section title in Section 5 should be reworded. In particular, "Numerical demonstration of the superiority of CPIC" should be renamed to "Results" or "Experiments".
"The motivation for using linear forecasting models is that good representations contribute to disentangling complex data in a linearly accessible way" - what does "linearly accessible way" mean?
Just below Eq 3: "U ∈ R^{N×D}" - what is U?
What are f_1 and γ in Eq 8? In the sentence below, what is β?
Label font sizes in Figure 2 and the legend font sizes in Figure 3 should be increased as they are currently hard to read.
Instead of showing mean and best performance, Figure 3 should show median performance with the inter-quartile range as the error bars to give a better idea of performance.
"Finally, as the SNR gets lower (SNR = 0.001, bottom-right) all methods perform poorly, but we note that, numerically, considering predictive information in latent space is much better than that between latent and observation space." - why is stochasticity better numerically? There is no justification given for this claim. |